diff --git a/index.html b/index.html index 77b656cbf..95c3604bc 100644 --- a/index.html +++ b/index.html @@ -1,7 +1,7 @@ - + Harvester manual test cases diff --git a/index.xml b/index.xml index 983e82c6a..036feb298 100644 --- a/index.xml +++ b/index.xml @@ -12,4018 +12,4018 @@ <link>https://harvester.github.io/tests/manual/deployment/1218-http-proxy-setting-harvester/</link> <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate> <guid>https://harvester.github.io/tests/manual/deployment/1218-http-proxy-setting-harvester/</guid> - <description>Related issue: #1218 Missing http proxy settings on rke2 and rancher pod Environment setup Setup an airgapped harvester Clone ipxe example repository https://github.com/harvester/ipxe-examples Edit the setting.xml file under vagrant ipxe example Set offline: true Use ipxe vagrant example to setup a 3 nodes cluster Verification Steps Open Settings, edit http-proxy with the following values HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; Create image from URL (change folder date to latest) https://cloud-images.ubuntu.com/focal/20211122/focal-server-cloudimg-amd64.img Create a virtual machine Prepare an S3 account with Bucket, Bucket region, Access Key ID and Secret Access Key Setup backup target in settings Edit virtual machine and take backup ssh to server node with user rancher Run kubectl create deployment nginx --image=nginx:latest on Harvester cluster Run kubectl get pods Expected Results At Step 2, Can download and create image from URL without error At step 6, Can backup running VM to external S3 storage correctly At step 6, Can delete backup from external S3 correctly At step 9, Can pull image from internet and deploy nginx pod in running status harvester-node-0:/home/rancher # kubectl create deployment nginx --image=nginx:latest deployment.</description> + <description><ul> <li>Related issue: <a href="https://github.com/harvester/harvester/issues/1218">#1218</a> Missing http proxy settings on rke2 and rancher pod</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup an airgapped harvester</p> <ol> <li>Clone ipxe example repository <a href="https://github.com/harvester/ipxe-examples">https://github.com/harvester/ipxe-examples</a></li> <li>Edit the <code>setting.xml</code> file under vagrant ipxe example</li> <li>Set offline: <code>true</code></li> <li>Use ipxe vagrant example to setup a 3 nodes cluster</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Open Settings, edit <code>http-proxy</code> with the following values</li> </ol> <pre tabindex="0"><code>HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; </code></pr</description> </item> <item> <title> https://harvester.github.io/tests/manual/harvester-rancher/1330-rancher-import-harvester-enhacement/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/1330-rancher-import-harvester-enhacement/ - Related issues: #1330 Http proxy setting download image Environment setup Install the latest rancher from docker command $ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6-head Create an one node harvester cluster Both harvester and rancher have internet connection Verification Steps Access rancher 
dashboard Open Virtualization Management page Import existing harvester Copy the registration url Create image from URL (change folder date to latest) https://cloud-images.ubuntu.com/focal/20211122/focal-server-cloudimg-amd64.img Access harvester dashboard Edit cluster-registration-url in settings Paste the registration url and save Back to rancher and wait for harvester imported in Rancher Expected Results Harvester can be imported in rancher dashboard with running status Can access harvester in virtual machine page Can create harvester cloud credential Can load harvester cloud credential while creating harvester + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1330">#1330</a> Http proxy setting download image</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ol> <li>Install the latest rancher from docker command</li> </ol> <pre tabindex="0"><code>$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6-head </code></pre><ol start="2"> <li>Create an one node harvester cluster</li> <li>Both harvester and rancher have internet connection</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Access rancher dashboard</li> <li>Open Virtualization Management page</li> <li>Import existing harvester</li> <li>Copy the registration url <img src="https://user-images.githubusercontent.com/29251855/143001156-31b06586-9b66-4016-a0f5-6dca92a7b2f6.png" alt="image"></li> <li>Create image from URL (change folder date to latest) <a href="https://cloud-images.ubuntu.com/focal/20211122/focal-server-cloudimg-amd64.img">https://cloud-images.ubuntu.com/focal/20211122/focal-server-cloudimg-amd64.img</a></li> <li>Access harvester dashboard</li> <li>Edit <code>cluster-registration-url</code> in settings <img src="https://user-images.githubusercontent.com/29251855/143771558-01398c11-8e3f-40c1-903e-2817cade80c8.png" alt="image"></li> <li>Paste the registration url and save</li> <li>Back to rancher and wait for harvester imported in Rancher</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Harvester can be imported in rancher dashboard with <code>running</code> status</li> <li>Can access harvester in virtual machine page</li> <li>Can create harvester cloud credential</li> <li>Can load harvester cloud credential while creating harvester</li> </ol> https://harvester.github.io/tests/manual/live-migration/1401-support-volume-hot-unplug-live-migrate/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/1401-support-volume-hot-unplug-live-migrate/ - Related issues: #1401 Http proxy setting download image Environment setup Setup an airgapped harvester Create an 3 nodes harvester cluster with large size disks Verification Steps Scenario2: Live migrate VM not have hot-plugged volume before, do hot-plugged the unplugged. 
Create a virtual machine Create several volumes (without image) Add volume, hot-plug volume to virtual machine Open virtual machine, find hot-plugged volume Click Detach volume Add volume again Migrate VM from one node to another Detach volume Add unplugged volume again Expected Results Can hot-plug volume without error Can hot-unplug the pluggable volumes without restarting VM The de-attached volume can also be hot-plug and mount back to VM + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1401">#1401</a> Http proxy setting download image</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup an airgapped harvester</p> <ol> <li>Create an 3 nodes harvester cluster with large size disks</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <h5 id="scenario2-live-migrate-vm-not-have-hot-plugged-volume-before-do-hot-plugged-the-unplugged">Scenario2: Live migrate VM not have hot-plugged volume before, do hot-plugged the unplugged.</h5> <ol> <li>Create a virtual machine</li> <li>Create several volumes (without image)</li> <li>Add volume, hot-plug volume to virtual machine</li> <li>Open virtual machine, find hot-plugged volume</li> <li>Click Detach volume</li> <li>Add volume again</li> <li>Migrate VM from one node to another</li> <li>Detach volume</li> <li>Add unplugged volume again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can hot-plug volume without error</li> <li>Can hot-unplug the pluggable volumes without restarting VM</li> <li>The de-attached volume can also be hot-plug and mount back to VM</li> </ol> https://harvester.github.io/tests/manual/volumes/1401-support-volume-hot-unplug/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/1401-support-volume-hot-unplug/ - Related issues: #1401 Http proxy setting download image Environment setup Setup an airgapped harvester Create an 3 nodes harvester cluster with large size disks Scenario1: Live migrate VM already have hot-plugged volume to new node, then detach (hot-unplug) it Verification Steps Create a virtual machine Create several volumes (without image) Add volume, hot-plug volume to virtual machine Open virtual machine, find hot-plugged volume Click de-attach volume Add volume again Expected Results Can hot-plug volume without error Can hot-unplug the pluggable volumes without restarting VM The de-attached volume can also be hot-plug and mount back to VM + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1401">#1401</a> Http proxy setting download image</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup an airgapped harvester</p> <ol> <li>Create an 3 nodes harvester cluster with large size disks</li> </ol> <h5 id="scenario1-live-migrate-vm-already-have-hot-plugged-volume-to-new-node-then-detach-hot-unplug-it">Scenario1: Live migrate VM already have hot-plugged volume to new node, then detach (hot-unplug) it</h5> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a virtual machine</li> <li>Create several volumes (without image)</li> <li>Add volume, hot-plug volume to virtual machine</li> <li>Open virtual machine, find hot-plugged volume</li> <li>Click de-attach volume</li> <li>Add volume again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can hot-plug volume without error</li> <li>Can hot-unplug the pluggable volumes without restarting VM</li> <li>The de-attached volume can also be hot-plug and mount back to VM</li> </ol> 02-Integrate to 
Rancher from Harvester settings (e2e_be) https://harvester.github.io/tests/manual/harvester-rancher/02-integrate-rancher-from-harvester-settings/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/02-integrate-rancher-from-harvester-settings/ - Environment setup Install the latest rancher from docker command $ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6-head Create an one node harvester cluster Both harvester and rancher have internet connection Verification Steps Access rancher dashboard Open Virtualization Management page Import existing harvester Copy the registration url Create image from URL (change folder date to latest) https://cloud-images.ubuntu.com/focal/20211122/focal-server-cloudimg-amd64.img Access harvester dashboard Edit cluster-registration-url in settings Paste the registration url and save Back to rancher and wait for harvester imported in Rancher Expected Results Harvester can be imported in rancher dashboard with running status Can access harvester in virtual machine page Can create harvester cloud credential Can load harvester cloud credential while creating harvester + <h2 id="environment-setup">Environment setup</h2> <ol> <li>Install the latest rancher from docker command</li> </ol> <pre tabindex="0"><code>$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6-head </code></pre><ol start="2"> <li>Create an one node harvester cluster</li> <li>Both harvester and rancher have internet connection</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Access rancher dashboard</li> <li>Open Virtualization Management page</li> <li>Import existing harvester</li> <li>Copy the registration url <img src="https://user-images.githubusercontent.com/29251855/143001156-31b06586-9b66-4016-a0f5-6dca92a7b2f6.png" alt="image"></li> <li>Create image from URL (change folder date to latest) <a href="https://cloud-images.ubuntu.com/focal/20211122/focal-server-cloudimg-amd64.img">https://cloud-images.ubuntu.com/focal/20211122/focal-server-cloudimg-amd64.img</a></li> <li>Access harvester dashboard</li> <li>Edit <code>cluster-registration-url</code> in settings <img src="https://user-images.githubusercontent.com/29251855/143771558-01398c11-8e3f-40c1-903e-2817cade80c8.png" alt="image"></li> <li>Paste the registration url and save</li> <li>Back to rancher and wait for harvester imported in Rancher</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Harvester can be imported in rancher dashboard with <code>running</code> status</li> <li>Can access harvester in virtual machine page</li> <li>Can create harvester cloud credential</li> <li>Can load harvester cloud credential while creating harvester</li> </ol> 03-Manage VM in Downstream Harvester https://harvester.github.io/tests/manual/harvester-rancher/03-manage-vm-downstream-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/03-manage-vm-downstream-harvester/ - Prerequisite: Harvester already imported to Rancher Dashboard Open harvester from Virtualization Management page Open Virtual Machine page Create a single instance virtual machine in Virtual Machines page Create multiple 3 instances virtual machines in Virtual Machines page Access and check virtual machine details Edit cpu, memory and network of one virtual machine Try Stop, Restart and Migrate virtual machine Try Clone virtual machine Try Delete virtual machine Use VM 
Template to create VM Expected Results Can create a single instance vm correctly Can create multiple instances vm correctly Can display all virtual machine information Can change cpu, memory and network and restart vm correctly Can Stop, Restart and Migrate virtual machine correctly Can Clone virtual machine correctly Can Delete virtual machine correctly + <p>Prerequisite: Harvester already imported to Rancher Dashboard</p> <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Open <code>Virtual Machine</code> page</li> <li>Create a single instance virtual machine in <code>Virtual Machines</code> page</li> <li>Create multiple 3 instances virtual machines in <code>Virtual Machines</code> page</li> <li>Access and check virtual machine details</li> <li>Edit cpu, memory and network of one virtual machine</li> <li>Try <code>Stop</code>, <code>Restart</code> and <code>Migrate</code> virtual machine</li> <li>Try <code>Clone</code> virtual machine</li> <li>Try <code>Delete</code> virtual machine</li> <li><code>Use VM Template</code> to create VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create a single instance vm correctly</li> <li>Can create multiple instances vm correctly</li> <li>Can display all virtual machine information</li> <li>Can change cpu, memory and network and restart vm correctly</li> <li>Can <code>Stop</code>, <code>Restart</code> and <code>Migrate</code> virtual machine correctly</li> <li>Can <code>Clone</code> virtual machine correctly</li> <li>Can <code>Delete</code> virtual machine correctly</li> </ol> 04-Manage Node in Downstream Harvester https://harvester.github.io/tests/manual/harvester-rancher/04-manage-host-downstream-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/04-manage-host-downstream-harvester/ - Prerequisite: Harvester already imported to Rancher Dashboard Open harvester from Virtualization Management page Open Host page Access and check node details Edit node config, change network and add disk Try to Cordon and decordon node Enable and disable Maintenance mode Expected Results Can diaply all node&rsquo;s information Can add disk to node correctly Can change network of node correctly Can Cordon and decordon node correctly Can enable and disable Maintenance mode + <p>Prerequisite: Harvester already imported to Rancher Dashboard</p> <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Open <code>Host</code> page</li> <li>Access and check node details</li> <li>Edit node config, change network and add disk</li> <li>Try to <code>Cordon</code> and <code>decordon</code> node</li> <li>Enable and disable <code>Maintenance mode</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can diaply all node&rsquo;s information</li> <li>Can add disk to node correctly</li> <li>Can change network of node correctly</li> <li>Can <code>Cordon</code> and <code>decordon</code> node correctly</li> <li>Can enable and disable <code>Maintenance mode</code></li> </ol> 05-Manage Image in Downstream Harvester https://harvester.github.io/tests/manual/harvester-rancher/05-manage-image-volume-downstream-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/05-manage-image-volume-downstream-harvester/ - Prerequisite: Harvester already imported to Rancher Dashboard Open harvester from Virtualization Management page Open Images page Create an image from URL Create an image from file Delete 
created images Expected Results Can create an image from URL Can create an image from file Can create an image from file Can delete created images correctly + <p>Prerequisite: Harvester already imported to Rancher Dashboard</p> <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Open <code>Images</code> page</li> <li>Create an image from URL</li> <li>Create an image from file</li> <li>Delete created images</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create an image from URL</li> <li>Can create an image from file</li> <li>Can create an image from file</li> <li>Can delete created images correctly</li> </ol> 06-Manage Network in Downstream Harvester https://harvester.github.io/tests/manual/harvester-rancher/06-manage-network-in-downstream-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/06-manage-network-in-downstream-harvester/ - Prerequisite: Harvester already imported to Rancher Dashboard Open harvester from Virtualization Management page Open Network page Create an new virtual network Create a new virtual machine using the new virtual network Delete a virtual network Expected Results Can create an new virtual network Create create a new virtual machine using the new virtual network Virtual machine can retrieve ip address Can delete a virtual network + <p>Prerequisite: Harvester already imported to Rancher Dashboard</p> <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Open <code>Network</code> page</li> <li>Create an new virtual network</li> <li>Create a new virtual machine using the new virtual network</li> <li>Delete a virtual network</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create an new virtual network</li> <li>Create create a new virtual machine using the new virtual network</li> <li>Virtual machine can retrieve ip address</li> <li>Can delete a virtual network</li> </ol> 07-Add and grant project-owner user to harvester (e2e_be) https://harvester.github.io/tests/manual/harvester-rancher/07-rbac-add-grant-project-owner-user-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/07-rbac-add-grant-project-owner-user-harvester/ - Open Users &amp; Authentication Click Users and Create Create user name project-owner and set password Select Standard User in the Global permission Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of default project Search project-owner user Assign Owner role to it Logout current user from Rancher Login with project-owner Open harvester from Virtualization Management page Expected Results Can create project-owner and set password Can assign Owner role to project-owner in default Can login correctly with project-owner Can manage all default project resources including host, virtual machines, volumes, VM and network + <ol> <li>Open <code>Users &amp; Authentication</code></li> <li>Click <code>Users</code> and Create</li> <li>Create user name <code>project-owner</code> and set password</li> <li>Select <code>Standard User</code> in the Global permission</li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>default</code> project</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/25221ce8-909a-4532-85d0-5a1912528f37" alt="image.png"></p> <ol> <li>Search project-owner user</li> 
<li>Assign <code>Owner</code> role to it</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/f3bb7b2d-f687-4cc0-bb98-f286f45ea17b" alt="image.png"></p> <ol> <li>Logout current user from Rancher</li> <li>Login with <code>project-owner</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create <code>project-owner</code> and set password</li> <li>Can assign <code>Owner</code> role to <code>project-owner</code> in default</li> <li>Can login correctly with <code>project-owner</code></li> <li>Can manage all <code>default</code> project resources including host, virtual machines, volumes, VM and network</li> </ol> 08-Add and grant project-readonly user to harvester https://harvester.github.io/tests/manual/harvester-rancher/08-rbac-add-grant-project-readonly-user-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/08-rbac-add-grant-project-readonly-user-harvester/ - Open Users &amp; Authentication Click Users and Create Create user name project-readonly and set password Select Standard User in the Global permission Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of default project Search project-readonly user Assign Read Only role to it Logout current user from Rancher Login with project-readonly Open harvester from Virtualization Management page Expected Results Can create project-readonly and set password Can assign Read Only role to project-readonly in default Can login correctly with project-readonly Can&rsquo;t see Host page in harvester Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip; + <ol> <li>Open <code>Users &amp; Authentication</code></li> <li>Click <code>Users</code> and Create</li> <li>Create user name <code>project-readonly</code> and set password</li> <li>Select <code>Standard User</code> in the Global permission</li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>default</code> project</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/25221ce8-909a-4532-85d0-5a1912528f37" alt="image.png"></p> <ol> <li>Search project-readonly user</li> <li>Assign <code>Read Only</code> role to it</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/0effd0f6-6e20-4415-801b-03c4c6294a24" alt="image.png"></p> <ol> <li>Logout current user from Rancher</li> <li>Login with <code>project-readonly</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create <code>project-readonly</code> and set password</li> <li>Can assign <code>Read Only</code> role to <code>project-readonly</code> in default</li> <li>Can login correctly with <code>project-readonly</code></li> <li>Can&rsquo;t see <code>Host</code> page in harvester</li> <li>Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip;</li> </ol> 09-Add and grant project-member user to harvester https://harvester.github.io/tests/manual/harvester-rancher/09-rbac-add-grant-project-member-user-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/09-rbac-add-grant-project-member-user-harvester/ - Open Users &amp; Authentication Click Users and Create Create 
user name project-member and set password Select Standard User in the Global permission Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of default project Search project-member user Assign Member role to it Logout current user from Rancher Login with project-member Open harvester from Virtualization Management page Expected Results Can create project-member and set password Can assign Member role to project-member in default Can login correctly with project-member Can&rsquo;t see Host page in harvester Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip; + <ol> <li>Open <code>Users &amp; Authentication</code></li> <li>Click <code>Users</code> and Create</li> <li>Create user name <code>project-member</code> and set password</li> <li>Select <code>Standard User</code> in the Global permission</li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>default</code> project</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/25221ce8-909a-4532-85d0-5a1912528f37" alt="image.png"></p> <ol> <li>Search project-member user</li> <li>Assign <code>Member</code> role to it</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/cac6a089-833c-4d37-b0da-bd0ad08677c1" alt="image.png"></p> <ol> <li>Logout current user from Rancher</li> <li>Login with <code>project-member</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create <code>project-member</code> and set password</li> <li>Can assign <code>Member</code> role to <code>project-member</code> in default</li> <li>Can login correctly with <code>project-member</code></li> <li>Can&rsquo;t see <code>Host</code> page in harvester</li> <li>Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip;</li> </ol> 10-Add and grant project-custom user to harvester https://harvester.github.io/tests/manual/harvester-rancher/10--rbacadd-grant-project-custom-user-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/10--rbacadd-grant-project-custom-user-harvester/ - Open Users &amp; Authentication Click Users and Create Create user name project-custom and set password Select Standard User in the Global permission Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of default project Search project-custom user Assign Custom role to it Set Create Namespace, Manage Volumes and View Volumes Logout current user from Rancher Login with project-custom Open harvester from Virtualization Management page Expected Results Can create project-custom and set password Can assign Custom role to project-custom in default Can login correctly with project-custom Can do Create Namespace, Manage Volumes and View Volumes in default project + <ol> <li>Open <code>Users &amp; Authentication</code></li> <li>Click <code>Users</code> and Create</li> <li>Create user name <code>project-custom</code> and set password</li> <li>Select <code>Standard User</code> in the Global permission</li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>default</code> project</li> </ol> <p><img 
src="https://images.zenhubusercontent.com/61519853321ea20d65443929/25221ce8-909a-4532-85d0-5a1912528f37" alt="image.png"></p> <ol> <li>Search project-custom user</li> <li>Assign <code>Custom</code> role to it</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/70098173-d9b5-43f5-85ab-5011f8c7d7c0" alt="image.png"></p> <ol> <li>Set <code>Create Namespace</code>, <code>Manage Volumes</code> and <code>View Volumes</code></li> <li>Logout current user from Rancher</li> <li>Login with <code>project-custom</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create <code>project-custom</code> and set password</li> <li>Can assign <code>Custom</code> role to <code>project-custom</code> in default</li> <li>Can login correctly with <code>project-custom</code></li> <li>Can do <code>Create Namespace</code>, <code>Manage Volumes</code> and <code>View Volumes</code> in default project</li> </ol> 11-Create New Project in Harvester https://harvester.github.io/tests/manual/harvester-rancher/11-create-project-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/11-create-project-harvester/ - Open harvester from Virtualization Management page Click Projects/Namespaces Click Create Project Set CPU and Memory limit in Resource Quotas Change view to testProject only Create some images Create some volumes Create a virtual machine Expected Results Can creat project correctly in Projects/Namespaces page Can create images correctly Can create volumes correctly Can create virtual machine correctly + <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Click Create Project</li> <li>Set CPU and Memory limit in <code>Resource Quotas</code></li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/4758318c-6e47-459e-95ef-5288c0a95d2a" alt="image.png"></p> <ol> <li>Change view to <code>testProject</code> only</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/3d0bc57b-ba09-44d4-9de1-8cc14ee87e0a" alt="image.png"></p> <ol> <li>Create some images</li> <li>Create some volumes</li> <li>Create a virtual machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can creat project correctly in <code>Projects/Namespaces</code> page</li> <li>Can create images correctly</li> <li>Can create volumes correctly</li> <li>Can create virtual machine correctly</li> </ol> 13-Add and grant project-owner user to custom project https://harvester.github.io/tests/manual/harvester-rancher/13-rbac-add-grant-project-owner-user-custom/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/13-rbac-add-grant-project-owner-user-custom/ - Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of testProject project Search project-owner user Assign Owner role to it Logout current user from Rancher Login with project-owner Open harvester from Virtualization Management page Change view to testProject only Expected Results Can assign Owner role to project-owner in testProject project Can manage all testProject project resources including host, virtual machines, volumes, VM and network + <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>testProject</code> 
project</li> <li>Search project-owner user</li> <li>Assign <code>Owner</code> role to it</li> <li>Logout current user from Rancher</li> <li>Login with <code>project-owner</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Change view to <code>testProject</code> only</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can assign <code>Owner</code> role to <code>project-owner</code> in <code>testProject</code> project</li> <li>Can manage all <code>testProject</code> project resources including host, virtual machines, volumes, VM and network</li> </ol> 14-Add and grant project-readonly user to custom project https://harvester.github.io/tests/manual/harvester-rancher/14-rbac-add-grant-project-readonly-user-custom/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/14-rbac-add-grant-project-readonly-user-custom/ - Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of testProject project Search project-readonly user Assign Read Only role to it Logout current user from Rancher Login with project-readonly Open harvester from Virtualization Management page Change view to testProject only Expected Results Can assign Read Only role to in testProject project Can login correctly with project-readonly Can&rsquo;t see Host page in testProject only view Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip; in testProject only view + <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>testProject</code> project</li> <li>Search project-readonly user</li> <li>Assign <code>Read Only</code> role to it</li> <li>Logout current user from Rancher</li> <li>Login with <code>project-readonly</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Change view to <code>testProject</code> only</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can assign <code>Read Only</code> role to in <code>testProject</code> project</li> <li>Can login correctly with <code>project-readonly</code></li> <li>Can&rsquo;t see <code>Host</code> page in <code>testProject</code> only view</li> <li>Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip; in <code>testProject</code> only view</li> </ol> 15-Add and grant project-member user to custom project https://harvester.github.io/tests/manual/harvester-rancher/15-rbac-add-grant-project-member-user-custom/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/15-rbac-add-grant-project-member-user-custom/ - Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of testProject project Search project-member user Assign Member role to it Logout current user from Rancher Login with project-member Open harvester from Virtualization Management page Change view to testProject only Expected Results Can assign Member role to project-member in testProject project Can login correctly with project-member Can&rsquo;t see Host page in testProject project Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip; in testProject project + <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>testProject</code> project</li> <li>Search project-member user</li> 
<li>Assign <code>Member</code> role to it</li> <li>Logout current user from Rancher</li> <li>Login with <code>project-member</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Change view to <code>testProject</code> only</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can assign <code>Member</code> role to <code>project-member</code> in <code>testProject</code> project</li> <li>Can login correctly with <code>project-member</code></li> <li>Can&rsquo;t see <code>Host</code> page in <code>testProject</code> project</li> <li>Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip; in <code>testProject</code> project</li> </ol> 16-Add and grant project-custom user to custom project https://harvester.github.io/tests/manual/harvester-rancher/16-rbac-add-grant-project-custom-user-custom/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/16-rbac-add-grant-project-custom-user-custom/ - Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of testProject project Search project-custom user Assign Custom role to it Set Create Namespace, Manage Volumes and View Volumes Logout current user from Rancher Login with project-custom Open harvester from Virtualization Management page Change view to testProject only Expected Results Can assign Custom role to project-custom in testProject project Can login correctly with project-custom Can do Create Namespace, Manage Volumes and View Volumes in testProject project + <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>testProject</code> project</li> <li>Search project-custom user</li> <li>Assign <code>Custom</code> role to it</li> <li>Set <code>Create Namespace</code>, <code>Manage Volumes</code> and <code>View Volumes</code></li> <li>Logout current user from Rancher</li> <li>Login with <code>project-custom</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Change view to <code>testProject</code> only</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can assign <code>Custom</code> role to <code>project-custom</code> in <code>testProject</code> project</li> <li>Can login correctly with <code>project-custom</code></li> <li>Can do <code>Create Namespace</code>, <code>Manage Volumes</code> and <code>View Volumes</code> in <code>testProject</code> project</li> </ol> 17-Delete Imported Harvester Cluster (e2e_be) https://harvester.github.io/tests/manual/harvester-rancher/17-delete-imported-harvester-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/17-delete-imported-harvester-cluster/ - Finish 01-Import existing Harvester clusters in Rancher Open Virtualization Management page Delete already imported harvester Expected Results Can delete imported harvester correctly + <ol> <li>Finish 01-Import existing Harvester clusters in Rancher</li> <li>Open <code>Virtualization Management</code> page</li> <li>Delete already imported harvester</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can delete imported harvester correctly</li> </ol> 18-Delete Failed Imported Harvester Cluster https://harvester.github.io/tests/manual/harvester-rancher/18-delete-failed-imported-harvester-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 
https://harvester.github.io/tests/manual/harvester-rancher/18-delete-failed-imported-harvester-cluster/ - Make failure in 01-Import existing Harvester clusters in Rancher Open Virtualization Management page Delete already imported harvester Expected Results Can delete imported harvester correctly + <ol> <li>Make failure in 01-Import existing Harvester clusters in Rancher</li> <li>Open <code>Virtualization Management</code> page</li> <li>Delete already imported harvester</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can delete imported harvester correctly</li> </ol> 20-Create RKE1 Kubernetes Cluster https://harvester.github.io/tests/manual/harvester-rancher/20-create-rke1-kubernetes-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/20-create-rke1-kubernetes-cluster/ - Click Cluster Management Click Cloud Credentials Click createa and select Harvester Input credential name Select existing cluster in the Imported Cluster list Click Create Expand RKE1 Configuration Add Template in Node template Select Harvester Select created cloud credential created Select default namespace Select ubuntu image Select network: vlan1 Provide SSH User: ubuntu Provide template name, click create Open Cluster page, click Create Toggle RKE1 Provide cluster name Provide Name Prefix + <ol> <li>Click Cluster Management</li> <li>Click Cloud Credentials</li> <li>Click createa and select <code>Harvester</code></li> <li>Input credential name</li> <li>Select existing cluster in the <code>Imported Cluster</code> list</li> <li>Click Create</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/4a2f6a52-dac7-4a27-84b3-14cbeb4156aa" alt="image.png"></p> <ol> <li>Expand RKE1 Configuration</li> <li>Add Template in <code>Node template</code></li> <li>Select Harvester</li> <li>Select created cloud credential created</li> <li>Select <code>default</code> namespace</li> <li>Select ubuntu image</li> <li>Select network: <code>vlan1</code></li> <li>Provide SSH User: <code>ubuntu</code></li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/19ca6b90-4688-4ff3-8ecd-60982edf1950" alt="image.png"></p> <p><img src="https://user-images.githubusercontent.com/29251855/147911503-df997d2f-fa48-4ce9-876b-f309b1d6c7b1.png" alt="image"></p> <ol> <li> <p>Provide template name, click create <img src="https://user-images.githubusercontent.com/29251855/147911570-7868367e-7729-4c4d-bfef-01751c76ed75.png" alt="image"></p> 21-Delete RKE1 Kubernetes Cluster https://harvester.github.io/tests/manual/harvester-rancher/21-delete-rke1-kubernetes-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/21-delete-rke1-kubernetes-cluster/ - Open Cluster Management Check provisioned RKE1 cluster Click Delete from menu Expected Results Can remove RKE1 Cluster and disapper on Cluster page RKE1 Cluster will be removed from rancher menu under explore cluster RKE1 virtual machine should be also be removed from Harvester + <ol> <li>Open Cluster Management</li> <li>Check provisioned RKE1 cluster</li> <li>Click <code>Delete</code> from menu</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove RKE1 Cluster and disapper on Cluster page</li> <li>RKE1 Cluster will be removed from rancher menu under explore cluster</li> <li>RKE1 virtual machine should be also be removed from Harvester</li> </ol> 22-Create RKE2 Kubernetes Cluster (e2e_be) 
https://harvester.github.io/tests/manual/harvester-rancher/22-create-rke2-kubernetes-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/22-create-rke2-kubernetes-cluster/ - Click Cluster Management Click Cloud Credentials Click create and select Harvester Input credential name Select existing cluster in the Imported Cluster list Click Create Click Clusters Click Create Toggle RKE2/K3s Select Harvester Input Cluster Name Select default namespace Select ubuntu image Select network vlan1 Input SSH User: ubuntu Click Create Wait for RKE2 cluster provisioning complete (~20min) Expected Results Provision RKE2 cluster successfully with Running status Can acccess RKE2 cluster to check all resources and services + <ol> <li>Click Cluster Management</li> <li>Click Cloud Credentials</li> <li>Click create and select <code>Harvester</code></li> <li>Input credential name</li> <li>Select existing cluster in the <code>Imported Cluster</code> list</li> <li>Click Create</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/4a2f6a52-dac7-4a27-84b3-14cbeb4156aa" alt="image.png"></p> <ol> <li>Click Clusters</li> <li>Click Create</li> <li>Toggle RKE2/K3s</li> <li>Select Harvester</li> <li>Input <code>Cluster Name</code></li> <li>Select <code>default</code> namespace</li> <li>Select ubuntu image</li> <li>Select network <code>vlan1</code></li> <li>Input SSH User: <code>ubuntu</code></li> <li>Click Create</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/cbd9cc9b-60fb-4e81-985a-13fcaa88fa2f" alt="image.png"></p> <ol> <li>Wait for RKE2 cluster provisioning complete (~20min)</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Provision RKE2 cluster successfully with <code>Running</code> status</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/4526b95b-71f4-498f-b509-dea60ec5e0e5" alt="image.png"></p> 23-Delete RKE2 Kubernetes Cluster (e2e_be) https://harvester.github.io/tests/manual/harvester-rancher/23-delete-rke2-kubernetes-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/23-delete-rke2-kubernetes-cluster/ - Open Cluster Management Check provisioned RKE2 cluster Click Delete from menu Expected Results Can remove RKE2 Cluster and disapper on Cluster page RKE2 Cluster will be removed from rancher menu under explore cluster RKE2 virtual machine should be also be removed from Harvester + <ol> <li>Open Cluster Management</li> <li>Check provisioned RKE2 cluster</li> <li>Click <code>Delete</code> from menu</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove RKE2 Cluster and disapper on Cluster page</li> <li>RKE2 Cluster will be removed from rancher menu under explore cluster</li> <li>RKE2 virtual machine should be also be removed from Harvester</li> </ol> 24-Delete RKE1 Kubernetes Cluster in Provisioning https://harvester.github.io/tests/manual/harvester-rancher/24-delete-rke1-kubernetes-cluster-provisioning/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/24-delete-rke1-kubernetes-cluster-provisioning/ - Provision RKE1 Cluster Management When RKE1 cluster show Provisioning Click Delete from menu Expected Results Can remove RKE1 Cluster and disapper on Cluster page RKE1 Cluster will be removed from rancher menu under explore cluster RKE1 virtual machine should be also be removed from Harvester + <ol> <li>Provision RKE1 Cluster 
Management</li> <li>When RKE1 cluster show <code>Provisioning</code></li> <li>Click <code>Delete</code> from menu</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove RKE1 Cluster and disapper on Cluster page</li> <li>RKE1 Cluster will be removed from rancher menu under explore cluster</li> <li>RKE1 virtual machine should be also be removed from Harvester</li> </ol> 25-Delete RKE1 Kubernetes Cluster in Failure https://harvester.github.io/tests/manual/harvester-rancher/25-delete-rke1-kubernetes-cluster-failure/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/25-delete-rke1-kubernetes-cluster-failure/ - Provision RKE1 Cluster Management When RKE1 cluster displayed in Failure Click Delete from menu Expected Results Can remove RKE1 Cluster and disapper on Cluster page RKE1 Cluster will be removed from rancher menu under explore cluster RKE1 virtual machine should be also be removed from Harvester + <ol> <li>Provision RKE1 Cluster Management</li> <li>When RKE1 cluster displayed in <code>Failure</code></li> <li>Click <code>Delete</code> from menu</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove RKE1 Cluster and disapper on Cluster page</li> <li>RKE1 Cluster will be removed from rancher menu under explore cluster</li> <li>RKE1 virtual machine should be also be removed from Harvester</li> </ol> 26-Delete RKE2 Kubernetes Cluster in Provisioning https://harvester.github.io/tests/manual/harvester-rancher/26-delete-rke2-kubernetes-cluster-provisioning/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/26-delete-rke2-kubernetes-cluster-provisioning/ - Provision RKE2 Cluster Management When RKE2 cluster show Provisioning Click Delete from menu Expected Results Can remove RKE2 Cluster and disapper on Cluster page RKE2 Cluster will be removed from rancher menu under explore cluster RKE2 virtual machine should be also be removed from Harvester + <ol> <li>Provision RKE2 Cluster Management</li> <li>When RKE2 cluster show <code>Provisioning</code></li> <li>Click <code>Delete</code> from menu</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove RKE2 Cluster and disapper on Cluster page</li> <li>RKE2 Cluster will be removed from rancher menu under explore cluster</li> <li>RKE2 virtual machine should be also be removed from Harvester</li> </ol> 27-Delete RKE2 Kubernetes Cluster in Failure https://harvester.github.io/tests/manual/harvester-rancher/27-delete-rke2-kubernetes-cluster-failure/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/27-delete-rke2-kubernetes-cluster-failure/ - Provision RKE2 Cluster Management When RKE2 cluster displayed in Failure Click Delete from menu Expected Results Can remove RKE2 Cluster and disapper on Cluster page RKE2 Cluster will be removed from rancher menu under explore cluster RKE2 virtual machine should be also be removed from Harvester + <ol> <li>Provision RKE2 Cluster Management</li> <li>When RKE2 cluster displayed in <code>Failure</code></li> <li>Click <code>Delete</code> from menu</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove RKE2 Cluster and disapper on Cluster page</li> <li>RKE2 Cluster will be removed from rancher menu under explore cluster</li> <li>RKE2 virtual machine should be also be removed from Harvester</li> </ol> 30-Configure Harvester LoadBalancer service 
https://harvester.github.io/tests/manual/harvester-rancher/30-configure-harvester-loadbalancer-service/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/30-configure-harvester-loadbalancer-service/ - Prerequisite: Already provision RKE1/RKE2 cluster in previous test case Open Global Settings in hamburger menu Replace ui-dashboard-index to https://releases.rancher.com/harvester-ui/dashboard/latest/index.html Change ui-offline-preferred to Remote Refresh the current page (ctrl + r) Open provisioned RKE2 cluster from hamburger menu Drop down Service Discovery Click Services Click Create Select Load Balancer Given service name to make the load balancer name composed of the cluster name, namespace, svc name, and suffix(8 characters) more than 63 characters Provide Listening port and Target port Click Add-on Config Select Health Check port Select dhcp as IPAM mode Provide Health Check Threshold Provide Health Check Failure Threshold Provide Health Check Period Provide Health Check Timeout Click Create button Create another load balancer service with the name characters. + <p>Prerequisite: Already provision RKE1/RKE2 cluster in previous test case</p> <ol> <li>Open <code>Global Settings</code> in hamburger menu</li> <li>Replace <code>ui-dashboard-index</code> to <code>https://releases.rancher.com/harvester-ui/dashboard/latest/index.html</code></li> <li>Change <code>ui-offline-preferred</code> to <code>Remote</code></li> <li>Refresh the current page (ctrl + r)</li> <li>Open provisioned RKE2 cluster from hamburger menu</li> <li>Drop down <code>Service Discovery</code></li> <li>Click <code>Services</code></li> <li>Click Create</li> <li>Select <code>Load Balancer</code></li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/f628094c-a195-4f99-9fb7-858d759dc019" alt="image.png"></p> <ol> <li>Given service name to make the load balancer name composed of the cluster name, namespace, svc name, and suffix(8 characters) more than 63 characters</li> <li>Provide Listening port and Target port</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/2c20c759-4769-438b-94ad-5b995ba66873" alt="image.png"></p> 31-Specify "pool" IPAM mode in LoadBalancer service https://harvester.github.io/tests/manual/harvester-rancher/31-specify-pool-ipam-mode-loadbalancer-service/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/31-specify-pool-ipam-mode-loadbalancer-service/ - Prerequisite: Already provision RKE1/RKE2 cluster in previous test case Open Global Settings in hamburger menu Replace ui-dashboard-index to https://releases.rancher.com/harvester-ui/dashboard/latest/index.html Change ui-offline-preferred to Remote Refresh the current page (ctrl + r) Access Harvester dashboard UI Go to Settings Create a vip-pool in Harvester settings. 
Open provisioned RKE2 cluster from hamburger menu Drop down Service Discovery Click Services Click Create Select Load Balancer Given service name Provide Listending port and Target port Click Add-on Config Provide Health Check port Select pool as IPAM mode Provide Health Check Threshold Provide Health Check Failure Threshold Provide Health Check Period Provide Health Check Timeout Click Create button Expected Results Can create load balance service correctly Can operate and route to deployed service correctly + <p>Prerequisite: Already provision RKE1/RKE2 cluster in previous test case</p> <ol> <li>Open <code>Global Settings</code> in hamburger menu</li> <li>Replace <code>ui-dashboard-index</code> to <code>https://releases.rancher.com/harvester-ui/dashboard/latest/index.html</code></li> <li>Change <code>ui-offline-preferred</code> to <code>Remote</code></li> <li>Refresh the current page (ctrl + r)</li> <li>Access Harvester dashboard UI</li> <li>Go to Settings</li> <li>Create a vip-pool in Harvester settings. <img src="https://user-images.githubusercontent.com/29251855/158514040-bfcd9ff3-964a-4511-94d7-a497ef88848f.png" alt="image"></li> <li>Open provisioned RKE2 cluster from hamburger menu</li> <li>Drop down <code>Service Discovery</code></li> <li>Click <code>Services</code></li> <li>Click Create</li> <li>Select <code>Load Balancer</code></li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/f628094c-a195-4f99-9fb7-858d759dc019" alt="image.png"></p> 32-Deploy Harvester CSI provider to RKE 1 Cluster https://harvester.github.io/tests/manual/harvester-rancher/32-deploy-harvester-csi-provider-to-rke1-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/32-deploy-harvester-csi-provider-to-rke1-cluster/ - Related task: #1396 Integration Cloud Provider for RKE1 with Rancher Environment Setup Docker install rancher v2.6.3 Create one node harvester with enough resource Verify steps Environment preparation as above steps Import harvester to rancher from harvester settings Create cloud credential Create RKE1 node template Provision a RKE1 cluster, check the Harvester as cloud provider Access RKE1 cluster Open charts in Apps &amp; Market page Install Harvester CSI driver Make sure CSI driver installed complete NAME: harvester-csi-driver LAST DEPLOYED: Thu Dec 16 03:59:54 2021 NAMESPACE: kube-system STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Successfully deployed Harvester CSI driver to the kube-system namespace. 
+ <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/1396">#1396</a> Integration Cloud Provider for RKE1 with Rancher</li> </ul> <h3 id="environment-setup">Environment Setup</h3> <ol> <li>Docker install rancher v2.6.3</li> <li>Create one node harvester with enough resource</li> </ol> <h3 id="verify-steps">Verify steps</h3> <ol> <li>Environment preparation as above steps</li> <li>Import harvester to rancher from harvester settings</li> <li>Create cloud credential</li> <li>Create RKE1 node template <img src="https://user-images.githubusercontent.com/29251855/146299688-3875c18f-61d6-48e6-a15e-250d59c177ba.png" alt="image"></li> <li>Provision a RKE1 cluster, check the <code>Harvester</code> as cloud provider <img src="https://user-images.githubusercontent.com/29251855/146342214-568bf017-e0e2-4b3a-9f38-894eff77d439.png" alt="image"></li> <li>Access RKE1 cluster</li> <li>Open charts in Apps &amp; Market page</li> <li>Install Harvester CSI driver</li> <li>Make sure CSI driver installed complete</li> </ol> <pre tabindex="0"><code>NAME: harvester-csi-driver LAST DEPLOYED: Thu Dec 16 03:59:54 2021 NAMESPACE: kube-system STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Successfully deployed Harvester CSI driver to the kube-system namespace. --------------------------------------------------------------------- SUCCESS: helm install --namespace=kube-system --timeout=10m0s --values=/home/shell/helm/values-harvester-csi-driver-100.0.0-up0.1.8.yaml --version=100.0.0+up0.1.8 --wait=true harvester-csi-driver /home/shell/helm/harvester-csi-driver-100.0.0-up0.1.8.tgz </code></pr 33-Deploy Harvester CSI provider to RKE 2 Cluster https://harvester.github.io/tests/manual/harvester-rancher/33-deploy-harvester-csi-provider-to-rke2-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/33-deploy-harvester-csi-provider-to-rke2-cluster/ - Click Clusters Click Create Toggle RKE2/K3s Select Harvester Input Cluster Name Select default namespace Select ubuntu image Select network vlan1 Input SSH User: ubuntu Check alread set Harvester as cloud provider Click Create Wait for RKE2 cluster provisioning complete (~20min) Expected Results Provision RKE2 cluster successfully with Running status Can acccess RKE2 cluster to check all resources and services Check CSI driver installed and configured on RKE2 cluster + <ol> <li>Click Clusters</li> <li>Click Create</li> <li>Toggle RKE2/K3s</li> <li>Select Harvester</li> <li>Input <code>Cluster Name</code></li> <li>Select <code>default</code> namespace</li> <li>Select ubuntu image</li> <li>Select network <code>vlan1</code></li> <li>Input SSH User: <code>ubuntu</code></li> <li>Check alread set <code>Harvester</code> as cloud provider</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/514d1d88-08e7-441a-861c-38bb3c96bbe7" alt="image.png"></p> <ol> <li>Click Create</li> <li>Wait for RKE2 cluster provisioning complete (~20min)</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Provision RKE2 cluster successfully with <code>Running</code> status</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/4526b95b-71f4-498f-b509-dea60ec5e0e5" alt="image.png"></p> <ol> <li>Can acccess RKE2 cluster to check all resources and services</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/682dccdc-cc0b-427f-ab7a-fdfaa1f82e06" alt="image.png"></p> 34-Hot plug and unplug volumes in RKE1 cluster 
https://harvester.github.io/tests/manual/harvester-rancher/34-hotplug-unplug-volumes-in-rke1-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/34-hotplug-unplug-volumes-in-rke1-cluster/ - Related task: #1396 Integration Cloud Provider for RKE1 with Rancher Environment Setup Docker install rancher v2.6.3 Create one node harvester with enough resource Verify Steps Environment preparation as above steps Import harvester to rancher from harvester settings Create cloud credential Create RKE1 node template Provision a RKE1 cluster, check the Harvester as cloud provider Access RKE1 cluster Open charts in Apps &amp; Market page Install harvester cloud provider and CSI driver + <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/1396">#1396</a> Integration Cloud Provider for RKE1 with Rancher</li> </ul> <h3 id="environment-setup">Environment Setup</h3> <ol> <li>Docker install rancher v2.6.3</li> <li>Create one node harvester with enough resource</li> </ol> <h3 id="verify-steps">Verify Steps</h3> <ol> <li> <p>Environment preparation as above steps</p> </li> <li> <p>Import harvester to rancher from harvester settings</p> </li> <li> <p>Create cloud credential</p> </li> <li> <p>Create RKE1 node template <img src="https://user-images.githubusercontent.com/29251855/146299688-3875c18f-61d6-48e6-a15e-250d59c177ba.png" alt="image"></p> </li> <li> <p>Provision a RKE1 cluster, check the <code>Harvester</code> as cloud provider <img src="https://user-images.githubusercontent.com/29251855/146342214-568bf017-e0e2-4b3a-9f38-894eff77d439.png" alt="image"></p> 35-Hot plug and unplug volumes in RKE2 cluster https://harvester.github.io/tests/manual/harvester-rancher/35-hotplug-unplug-volumes-in-rke2-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/35-hotplug-unplug-volumes-in-rke2-cluster/ - Related task: #1396 Integration Cloud Provider for RKE1 with Rancher Environment Setup Docker install rancher v2.6.3 Create one node harvester with enough resource Verify Steps Environment preparation as above steps Import harvester to rancher from harvester settings Create cloud credential Create RKE2 cluster as test case #34 Access RKE2 cluster Open charts in Apps &amp; Market page Install harvester cloud provider and CSI driver Make sure cloud provider installed complete + <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/1396">#1396</a> Integration Cloud Provider for RKE1 with Rancher</li> </ul> <h3 id="environment-setup">Environment Setup</h3> <ol> <li>Docker install rancher v2.6.3</li> <li>Create one node harvester with enough resource</li> </ol> <h3 id="verify-steps">Verify Steps</h3> <ol> <li> <p>Environment preparation as above steps</p> </li> <li> <p>Import harvester to rancher from harvester settings</p> </li> <li> <p>Create cloud credential</p> </li> <li> <p>Create RKE2 cluster as test case #34</p> </li> <li> <p>Access RKE2 cluster</p> 36-Remove Harvester LoadBalancer service https://harvester.github.io/tests/manual/harvester-rancher/36-remove-harvester-loadbalancer-service/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/36-remove-harvester-loadbalancer-service/ - Open provisioned RKE2 cluster from hamburger menu Drop down Service Discovery Click Services Delete previous created load balancer service Expected Results Can remove load balance service correctly Service will be removed from assigned Apps + <ol> <li>Open 
provisioned RKE2 cluster from hamburger menu</li> <li>Drop down <code>Service Discovery</code></li> <li>Click <code>Services</code></li> <li>Delete previous created load balancer service</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove load balance service correctly</li> <li>Service will be removed from assigned Apps</li> </ol> 37-Import Online Harvester From the Airgapped Rancher https://harvester.github.io/tests/manual/harvester-rancher/37-import-online-harvester-from-airgapped-rancher-copy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/37-import-online-harvester-from-airgapped-rancher-copy/ - Environment Setup Setup the online harvester Use ipxe vagrant example to setup a 3 nodes cluster https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester Enable vlan on harvester-mgmt Now harvester dashboard page will out of work Create ubuntu cloud image from URL Create virtual machine with name vlan1 and id: 1 Create virtual machine and assign vlan network, confirm can get ip address Setup squid HTTP proxy server Move to vagrant pxe harvester folder Execute vagrant ssh pxe_server Run apt-get install squid Edit /etc/squid/squid. + <h3 id="environment-setup">Environment Setup</h3> <p>Setup the online harvester</p> <ol> <li>Use ipxe vagrant example to setup a 3 nodes cluster <a href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester">https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester</a></li> <li>Enable vlan on <code>harvester-mgmt</code></li> <li>Now harvester dashboard page will out of work</li> <li>Create ubuntu cloud image from URL</li> <li>Create virtual machine with name <code>vlan1</code> and id: <code>1</code></li> <li>Create virtual machine and assign vlan network, confirm can get ip address</li> </ol> <p>Setup squid HTTP proxy server</p> <ol> <li>Move to vagrant pxe harvester folder</li> <li>Execute <code>vagrant ssh pxe_server</code></li> <li>Run <code>apt-get install squid</code></li> <li>Edit <code>/etc/squid/squid.conf</code> and add line</li> </ol> <pre tabindex="0"><code>http_access allow all http_port 3128 </code></pr 37-Import Online Harvester From the Airgapped Rancher https://harvester.github.io/tests/manual/harvester-rancher/37-import-online-harvester-from-airgapped-rancher/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/37-import-online-harvester-from-airgapped-rancher/ - Environment Setup Setup the online harvester Use ipxe vagrant example to setup a 3 nodes cluster https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester Enable vlan on harvester-mgmt Now harvester dashboard page will out of work Create ubuntu cloud image from URL Create virtual machine with name vlan1 and id: 1 Create virtual machine and assign vlan network, confirm can get ip address Setup squid HTTP proxy server Move to vagrant pxe harvester folder Execute vagrant ssh pxe_server Run apt-get install squid Edit /etc/squid/squid. 
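The squid proxy setup quoted above reduces to a few commands on the pxe_server VM; a minimal shell sketch, assuming squid runs as a systemd service (the restart step is implied rather than spelled out in the original steps):
<pre tabindex="0"><code># inside the vagrant pxe_server VM
vagrant ssh pxe_server
sudo apt-get install -y squid
# append the two configuration lines used by the test
echo 'http_access allow all' | sudo tee -a /etc/squid/squid.conf
echo 'http_port 3128' | sudo tee -a /etc/squid/squid.conf
sudo systemctl restart squid   # assumption: squid is managed by systemd
</code></pre>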
+ <h3 id="environment-setup">Environment Setup</h3> <p>Setup the online harvester</p> <ol> <li>Use ipxe vagrant example to setup a 3 nodes cluster <a href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester">https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester</a></li> <li>Enable vlan on <code>harvester-mgmt</code></li> <li>Now harvester dashboard page will out of work</li> <li>Create ubuntu cloud image from URL</li> <li>Create virtual machine with name <code>vlan1</code> and id: <code>1</code></li> <li>Create virtual machine and assign vlan network, confirm can get ip address</li> </ol> <p>Setup squid HTTP proxy server</p> <ol> <li>Move to vagrant pxe harvester folder</li> <li>Execute <code>vagrant ssh pxe_server</code></li> <li>Run <code>apt-get install squid</code></li> <li>Edit <code>/etc/squid/squid.conf</code> and add line</li> </ol> <pre tabindex="0"><code>http_access allow all http_port 3128 </code></pr 38-Import Airgapped Harvester From the Airgapped Rancher https://harvester.github.io/tests/manual/harvester-rancher/38-import-airgapped-harvester-from-airgapped-rancher/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/38-import-airgapped-harvester-from-airgapped-rancher/ - Related task: #1052 Test Air gap with Rancher integration Environment Setup Setup the airgapped harvester Fetch ipxe vagrant example with new offline feature https://github.com/harvester/ipxe-examples/pull/32 Edit the setting.xml file Set offline: true Use ipxe vagrant example to setup a 3 nodes cluster Enable vlan on harvester-mgmt Now harvester dashboard page will out of work Create virtual machine with name vlan1 and id: 1 Open Settings, edit http-proxy with the following values HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127. 
+ <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/1052">#1052</a> Test Air gap with Rancher integration</li> </ul> <h3 id="environment-setup">Environment Setup</h3> <p>Setup the airgapped harvester</p> <ol> <li>Fetch ipxe vagrant example with new offline feature <a href="https://github.com/harvester/ipxe-examples/pull/32">https://github.com/harvester/ipxe-examples/pull/32</a></li> <li>Edit the setting.xml file</li> <li>Set offline: <code>true</code></li> <li>Use ipxe vagrant example to setup a 3 nodes cluster</li> <li>Enable vlan on <code>harvester-mgmt</code></li> <li>Now harvester dashboard page will out of work</li> <li>Create virtual machine with name <code>vlan1</code> and id: <code>1</code></li> <li>Open Settings, edit <code>http-proxy</code> with the following values</li> </ol> <pre tabindex="0"><code>HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; </code></pr 39-Standard user no Harvester Access https://harvester.github.io/tests/manual/harvester-rancher/39-rbac-standard-user-no-access/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/39-rbac-standard-user-no-access/ - As admin import/register a harvester cluster in Rancher As admin, Enable Harvester node driver As a standard user User1, login to rancher Verify User1 has no access to harvester cluster in Virtualization management page Verify User1 can not create harvester cloud credential as User1 Verify User1 can not use this cloud credential to create a node template and can not use a node driver cluster 3 and can not CRUD each resource + <ol> <li>As admin import/register a harvester cluster in Rancher</li> <li>As admin, Enable Harvester node driver</li> <li>As a standard user User1, login to rancher</li> <li>Verify User1 has no access to harvester cluster in Virtualization management page</li> <li>Verify User1 can not create harvester cloud credential as User1</li> <li>Verify User1 can not use this cloud credential to create a node template and can not use a node driver cluster 3 and can not CRUD each resource</li> </ol> 40-RBAC Add restricted admin User Harvester https://harvester.github.io/tests/manual/harvester-rancher/40-rbac-add-restricted-admin-user-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/40-rbac-add-restricted-admin-user-harvester/ - As admin import/register a harvester cluster in Rancher create restricted admin user rstradm verify rstradm has access to the Virtualization management page and the harvester cluster is listed Verify rstradm has access to Harvester UI through rancher by selecting it from the list in step 3 and can CRUD each resource + <ol> <li>As admin import/register a harvester cluster in Rancher</li> <li>create restricted admin user rstradm</li> <li>verify rstradm has access to the Virtualization management page and the harvester cluster is listed</li> <li>Verify rstradm has access to Harvester UI through rancher by selecting it from the list in step 3 and can CRUD each resource</li> </ol> 41-Import Harvester into nested Rancher https://harvester.github.io/tests/manual/harvester-rancher/41-rancher-nested-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/41-rancher-nested-harvester/ - Prerequisite: External network on VLAN Install Rancher in a VM using Docker method on Harvester 
cluster using the external VLAN Login rancher dashboard Navigate to Virtual Management Page Click import existing Copy the curl command SSH to harvester master node (user: rancher) Execute the curl command to import harvester to rancher curl --insecure -sfL https://192.168.50.82/v3/import/{identifier}.yaml | kubectl apply -f - Run sudo chmod 775 /etc/rancher/rke2/rke2.yaml to solve the permission denied error Run curl command again, you should see the following successful import message namespace/cattle-system configured serviceaccount/cattle created clusterrolebinding. + <p>Prerequisite: External network on VLAN</p> <ol> <li>Install Rancher in a VM using Docker method on Harvester cluster using the external VLAN</li> <li>Login rancher dashboard</li> <li>Navigate to Virtual Management Page</li> <li>Click import existing</li> <li>Copy the curl command <img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/08e70d37-e573-47b1-a3d6-0f3615116d48" alt="image.png"></li> <li>SSH to harvester master node (user: rancher)</li> <li>Execute the curl command to import harvester to rancher <code>curl --insecure -sfL https://192.168.50.82/v3/import/{identifier}.yaml | kubectl apply -f -</code></li> <li>Run <code>sudo chmod 775 /etc/rancher/rke2/rke2.yaml</code> to solve the permission denied error</li> <li>Run curl command again, you should see the following successful import message <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>namespace/cattle-system configured </span></span><span style="display:flex;"><span>serviceaccount/cattle created </span></span><span style="display:flex;"><span>clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created </span></span><span style="display:flex;"><span>secret/cattle-credentials-413137f created </span></span><span style="display:flex;"><span>clusterrole.rbac.authorization.k8s.io/cattle-admin created </span></span><span style="display:flex;"><span>deployment.apps/cattle-cluster-agent created </span></span><span style="display:flex;"><span>service/cattle-cluster-agent created </span></span></code></pr 42-Add cloud credential KUBECONFIG https://harvester.github.io/tests/manual/harvester-rancher/42-add-cloud-credential-kubeconfig/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/42-add-cloud-credential-kubeconfig/ - Prerequisite: KUBECONFIG from Harvester Click Cluster Management Click Cloud Credentials Click createa and select Harvester Input credential name Select external cluster Input KUBECONFIG from Harvester Click Create + <p>Prerequisite: KUBECONFIG from Harvester</p> <ol> <li>Click Cluster Management</li> <li>Click Cloud Credentials</li> <li>Click createa and select <code>Harvester</code></li> <li>Input credential name</li> <li>Select external cluster</li> <li>Input KUBECONFIG from Harvester</li> <li>Click Create</li> </ol> <p><img src="https://user-images.githubusercontent.com/83787952/134994316-30438401-b80f-47a9-bbe4-122bf0a2a69f.jpg" alt="image.png"></p> 43-Scale up node driver RKE1 https://harvester.github.io/tests/manual/harvester-rancher/43-node-driver-scale-up-rke1/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/43-node-driver-scale-up-rke1/ - Prerequisite: RKE1 cluster in Harvester with at least 2 worker nodes provision a multinode cluster using harvester node driver with at 
least 2 worker nodes scale up a node in the cluster + <p>Prerequisite: RKE1 cluster in Harvester with at least 2 worker nodes</p> <ol> <li>provision a multinode cluster using harvester node driver with at least 2 worker nodes</li> <li>scale up a node in the cluster</li> </ol> 44-Scale up node driver RKE2 https://harvester.github.io/tests/manual/harvester-rancher/44-node-driver-scale-up-rke2/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/44-node-driver-scale-up-rke2/ - Prerequisite: KUBECONFIG from Harvester provision a multinode cluster using harvester node driver with at least 2 worker nodes scale up a node in the cluster + <p>Prerequisite: KUBECONFIG from Harvester</p> <ol> <li>provision a multinode cluster using harvester node driver with at least 2 worker nodes</li> <li>scale up a node in the cluster</li> </ol> 45-Scale down node driver RKE1 https://harvester.github.io/tests/manual/harvester-rancher/45-node-driver-scale-down-rke1/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/45-node-driver-scale-down-rke1/ - Prerequisite: KUBECONFIG from Harvester provision a multinode cluster using harvester node driver with at least 2 worker nodes scale down a node in the cluster + <p>Prerequisite: KUBECONFIG from Harvester</p> <ol> <li>provision a multinode cluster using harvester node driver with at least 2 worker nodes</li> <li>scale down a node in the cluster</li> </ol> 46-Scale down node driver RKE2 https://harvester.github.io/tests/manual/harvester-rancher/46-node-driver-scale-down-rke2/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/46-node-driver-scale-down-rke2/ - Prerequisite: KUBECONFIG from Harvester provision a multinode cluster using harvester node driver with at least 2 worker nodes scale down a node in the cluster + <p>Prerequisite: KUBECONFIG from Harvester</p> <ol> <li>provision a multinode cluster using harvester node driver with at least 2 worker nodes</li> <li>scale down a node in the cluster</li> </ol> 49-Overprovision Harvester https://harvester.github.io/tests/manual/harvester-rancher/49-overprovision-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/49-overprovision-harvester/ - import harvester into rancher over-provision the connected harvester cluster (i.e. deploy large number of nodes) note: the number will depend on the resources available in the harvester cluster you&rsquo;ve imported. i.e. a harvester setup with 24 cores, 64 GB of ram, you could try provisioning a 3cp, 2cp, 2w cluster of size 4 vCPU 8GB ram to over-provision CPU i.e. a harvester setup with 24 cores, 64 GB of ram, you could try provisioning a 3cp, 2cp, 2w cluster of size 2 vCPU 10GB ram to over-provision CPU + <ol> <li>import harvester into rancher</li> <li>over-provision the connected harvester cluster (i.e. deploy large number of nodes)</li> <li>note: the number will depend on the resources available in the harvester cluster you&rsquo;ve imported.</li> <li>i.e. a harvester setup with 24 cores, 64 GB of ram, you could try provisioning a 3cp, 2cp, 2w cluster of size 4 vCPU 8GB ram to over-provision CPU</li> <li>i.e. 
a harvester setup with 24 cores, 64 GB of ram, you could try provisioning a 3cp, 2cp, 2w cluster of size 2 vCPU 10GB ram to over-provision CPU</li> </ol> 50-Use fleet when a harvester cluster is imported to rancher https://harvester.github.io/tests/manual/harvester-rancher/50-fleet-with-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/50-fleet-with-harvester/ - deploy rancher with harvester enabled docker: &ndash;features=harvester=enabled helm: &ndash;set &rsquo;extraEnv[0].name=CATTLE_FEATURES&rsquo; &ndash;set &rsquo;extraEnv[0].value=harvester=enabled import a harvester setup go to fleet → repos -&gt; create validate that that the harvester cluster is NOT in the dropdown for cluster deployments validate that selecting the &lsquo;all clusters&rsquo; option for deployment does NOT deploy to the harvester cluster + <ol> <li>deploy rancher with harvester enabled</li> <li>docker: &ndash;features=harvester=enabled</li> <li>helm: &ndash;set &rsquo;extraEnv[0].name=CATTLE_FEATURES&rsquo; &ndash;set &rsquo;extraEnv[0].value=harvester=enabled</li> <li>import a harvester setup</li> <li>go to fleet → repos -&gt; create</li> <li>validate that that the harvester cluster is NOT in the dropdown for cluster deployments</li> <li>validate that selecting the &lsquo;all clusters&rsquo; option for deployment does NOT deploy to the harvester cluster</li> </ol> 51-Use harvester cloud provider to provision an LB - rke1 https://harvester.github.io/tests/manual/harvester-rancher/51-harvester-cloud-provider-loadbalancer-rke1/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/51-harvester-cloud-provider-loadbalancer-rke1/ - Related ticket: #1396 Integration Cloud Provider for RKE1 with Rancher Provision cluster using rke1 with harvester as the node driver Deploy cloud provider from App. Create a deployment with nginx:latest image. Create a Harvester load balancer to the pod of above deployment. Verify by clicking the service, if the load balancer is redirecting to the nginx home page. + <ul> <li>Related ticket: <a href="https://github.com/harvester/harvester/issues/1396">#1396</a> Integration Cloud Provider for RKE1 with Rancher</li> </ul> <ol> <li>Provision cluster using rke1 with harvester as the node driver</li> <li>Deploy cloud provider from App.</li> <li>Create a deployment with <code>nginx:latest</code> image.</li> <li>Create a Harvester load balancer to the pod of above deployment.</li> <li>Verify by clicking the service, if the load balancer is redirecting to the nginx home page.</li> </ol> 52-Use harvester cloud provider to provision an LB - rke2 https://harvester.github.io/tests/manual/harvester-rancher/52-harvester-cloud-provider-loadbalancer-rke2/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/52-harvester-cloud-provider-loadbalancer-rke2/ - Provision cluster using rke2 with harvester as the node driver Enable the cloud driver for harvester while provisioning the cluster Create a deployment with nginx:latest image. Create a Harvester load balancer to the pod of above deployment. Verify by clicking the service, if the load balancer is redirecting to the nginx home page. 
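Test cases 51 and 52 above put a Harvester load balancer in front of an nginx deployment; a minimal kubectl sketch of the same flow (the service name is illustrative, and the load balancer can equally be created from the Rancher UI as described):
<pre tabindex="0"><code># create the nginx deployment used by the test
kubectl create deployment nginx --image=nginx:latest
# expose it through a LoadBalancer service handled by the Harvester cloud provider
kubectl expose deployment nginx --type=LoadBalancer --name=nginx-lb --port=8080 --target-port=80
# the external IP assigned to nginx-lb should serve the nginx home page
kubectl get service nginx-lb
</code></pre>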
+ <ol> <li>Provision cluster using rke2 with harvester as the node driver</li> <li>Enable the cloud driver for <code>harvester</code> while provisioning the cluster</li> <li>Create a deployment with <code>nginx:latest</code> image.</li> <li>Create a Harvester load balancer to the pod of above deployment.</li> <li>Verify by clicking the service, if the load balancer is redirecting to the nginx home page.</li> </ol> 53-Disable Harvester flag with Harvester cluster added https://harvester.github.io/tests/manual/harvester-rancher/53-disable-harvester-flag/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/53-disable-harvester-flag/ - Pre-requisites: Rancher with Harvester imported Disable Harvester feature flag on Rancher Expected Results Harvester should show up in cluster management Virtualization management tab should be hidden. + <p>Pre-requisites: Rancher with Harvester imported</p> <ol> <li>Disable Harvester feature flag on Rancher</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Harvester should show up in cluster management</li> <li>Virtualization management tab should be hidden.</li> </ol> 54-Import Airgapped Harvester From the Online Rancher https://harvester.github.io/tests/manual/harvester-rancher/54-import-airgapped-harvester-from-online-rancher/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/54-import-airgapped-harvester-from-online-rancher/ - Environment Setup Setup the airgapped harvester Fetch ipxe vagrant example with new offline feature https://github.com/harvester/ipxe-examples/pull/32 Edit the setting.xml file Set offline: true Use ipxe vagrant example to setup a 3 nodes cluster https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester Enable vlan on harvester-mgmt Now harvester dashboard page will out of work Open Settings, edit http-proxy with the following values HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; Create ubuntu cloud image from URL Create virtual machine with name vlan1 and id: 1 Create virtual machine and assign vlan network, confirm can get ip address Setup squid HTTP proxy server + <h3 id="environment-setup">Environment Setup</h3> <p>Setup the airgapped harvester</p> <ol> <li>Fetch ipxe vagrant example with new offline feature <a href="https://github.com/harvester/ipxe-examples/pull/32">https://github.com/harvester/ipxe-examples/pull/32</a></li> <li>Edit the setting.xml file</li> <li>Set offline: <code>true</code></li> <li>Use ipxe vagrant example to setup a 3 nodes cluster <a href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester">https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester</a></li> <li>Enable vlan on <code>harvester-mgmt</code></li> <li>Now harvester dashboard page will out of work</li> <li>Open Settings, edit <code>http-proxy</code> with the following values</li> </ol> <pre tabindex="0"><code>HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; </code></pre><ol> <li>Create ubuntu cloud image from URL</li> <li>Create virtual machine with name <code>vlan1</code> and id: <code>1</code></li> <li>Create virtual machine and assign vlan network, confirm can get ip address</li> </ol> <p>Setup squid HTTP 
proxy server</p> 55-Import Harvester to Rancher in airgapped different subnet https://harvester.github.io/tests/manual/harvester-rancher/55-import-harvester-rancher-airgapped-different-subnet/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/55-import-harvester-rancher-airgapped-different-subnet/ - Environment Setup Note: Harvester and Rancher are under different subnet, can access to each other Setup the airgapped harvester Fetch ipxe vagrant example with new offline feature https://github.com/harvester/ipxe-examples/pull/32 Edit the setting.xml file Set offline: true Use ipxe vagrant example to setup a 3 nodes cluster Enable vlan on harvester-mgmt Create virtual machine with name vlan1 and id: 1 Open Settings, edit http-proxy with the following values HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; Create ubuntu cloud image from URL Create virtual machine and assign vlan network, confirm can get ip address Setup squid HTTP proxy server + <h3 id="environment-setup">Environment Setup</h3> <p><code>Note: Harvester and Rancher are under different subnet, can access to each other</code></p> <p>Setup the airgapped harvester</p> <ol> <li>Fetch ipxe vagrant example with new offline feature <a href="https://github.com/harvester/ipxe-examples/pull/32">https://github.com/harvester/ipxe-examples/pull/32</a></li> <li>Edit the setting.xml file</li> <li>Set offline: <code>true</code></li> <li>Use ipxe vagrant example to setup a 3 nodes cluster</li> <li>Enable vlan on <code>harvester-mgmt</code></li> <li>Create virtual machine with name <code>vlan1</code> and id: <code>1</code></li> <li>Open Settings, edit <code>http-proxy</code> with the following values</li> </ol> <pre tabindex="0"><code>HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; </code></pr 56-Import Harvester to Rancher in airgapped different subnet https://harvester.github.io/tests/manual/harvester-rancher/56-import-harvester-rancher-online-different-subnet/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/56-import-harvester-rancher-online-different-subnet/ - Environment Setup Note: Harvester and Rancher are under different subnet, can access to each other Setup the online harvester Iso or vagrant ipxe install harvester on network with internet connection Enable vlan on harvester-mgmt Create virtual machine with name vlan1 and id: 1 Create ubuntu cloud image from URL Create virtual machine and assign vlan network, confirm can get ip address Setup the online rancher Install rancher on network with internet connection throug docker command $ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2. 
+ <h3 id="environment-setup">Environment Setup</h3> <p><code>Note: Harvester and Rancher are under different subnet, can access to each other</code></p> <p>Setup the online harvester</p> <ol> <li>Iso or vagrant ipxe install harvester on network with internet connection</li> <li>Enable vlan on <code>harvester-mgmt</code></li> <li>Create virtual machine with name <code>vlan1</code> and id: <code>1</code></li> <li>Create ubuntu cloud image from URL</li> <li>Create virtual machine and assign vlan network, confirm can get ip address</li> </ol> <p>Setup the online rancher</p> 57-Import airgapped harvester from airgapped rancher with Proxy https://harvester.github.io/tests/manual/harvester-rancher/57-import-airgapped-harvester-from-airgapped-rancher-proxy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/57-import-airgapped-harvester-from-airgapped-rancher-proxy/ - Related task: #1052 Test Air gap with Rancher integration Environment Setup Setup the airgapped harvester Fetch ipxe vagrant example with new offline feature https://github.com/harvester/ipxe-examples/pull/32 Edit the setting.xml file Set offline: true Use ipxe vagrant example to setup a 3 nodes cluster Enable vlan on harvester-mgmt Now harvester dashboard page will out of work Create virtual machine with name vlan1 and id: 1 Open Settings, edit http-proxy with the following values HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127. + <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/1052">#1052</a> Test Air gap with Rancher integration</li> </ul> <h3 id="environment-setup">Environment Setup</h3> <p>Setup the airgapped harvester</p> <ol> <li>Fetch ipxe vagrant example with new offline feature <a href="https://github.com/harvester/ipxe-examples/pull/32">https://github.com/harvester/ipxe-examples/pull/32</a></li> <li>Edit the setting.xml file</li> <li>Set offline: <code>true</code></li> <li>Use ipxe vagrant example to setup a 3 nodes cluster</li> <li>Enable vlan on <code>harvester-mgmt</code></li> <li>Now harvester dashboard page will out of work</li> <li>Create virtual machine with name <code>vlan1</code> and id: <code>1</code></li> <li>Open Settings, edit <code>http-proxy</code> with the following values</li> </ol> <pre tabindex="0"><code>HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; </code></pr 58-Negative-Fully power cycle harvester node machine should recover RKE2 cluster https://harvester.github.io/tests/manual/harvester-rancher/58-negative-fully-power-cycle-harvester-node-machine-should-recover-rke2-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/58-negative-fully-power-cycle-harvester-node-machine-should-recover-rke2-cluster/ - Related issue: #1561 Fully shutdown then power on harvester node machine can&rsquo;t get provisioned RKE2 cluster back to work Related issue: #1428 rke2-coredns-rke2-coredns-autoscaler timeout Environment Setup The network environment must have vlan network configured and also have DHCP server prepared on your testing vlan Verification Step Prepare a 3 nodes harvester cluster (provo bare machine) Enable virtual network with harvester-mgmt Create vlan1 with id 1 Import harvester from rancher and create cloud credential Provision a RKE2 cluster with vlan 1 Wait for build up ready Shutdown harvester node 
3 Shutdown harvester node 2 Shutdown harvester node 1 Wait for 20 minutes Power on node 1, wait 10 seconds Power on node 2, wait 10 seconds Power on node 3 Wait for harvester startup complete Wait for RKE2 cluster back to work Check node and VIP accessibility Check the rke2-coredns pod status kubectl get pods --all-namespaces | grep rke2-coredns Expected Results RKE2 cluster on harvester can recover to Active status + <ul> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1561">#1561</a> Fully shutdown then power on harvester node machine can&rsquo;t get provisioned RKE2 cluster back to work</p> </li> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1428">#1428</a> rke2-coredns-rke2-coredns-autoscaler timeout</p> </li> </ul> <h2 id="environment-setup">Environment Setup</h2> <ul> <li>The network environment must have vlan network configured and also have DHCP server prepared on your testing vlan</li> </ul> <h2 id="verification-step">Verification Step</h2> <ol> <li>Prepare a 3 nodes harvester cluster (provo bare machine)</li> <li>Enable virtual network with harvester-mgmt</li> <li>Create vlan1 with id <code>1</code></li> <li>Import harvester from rancher and create cloud credential</li> <li>Provision a RKE2 cluster with vlan <code>1</code></li> <li>Wait for build up ready</li> <li>Shutdown harvester node 3</li> <li>Shutdown harvester node 2</li> <li>Shutdown harvester node 1</li> <li>Wait for 20 minutes</li> <li>Power on node 1, wait 10 seconds</li> <li>Power on node 2, wait 10 seconds</li> <li>Power on node 3</li> <li>Wait for harvester startup complete</li> <li>Wait for RKE2 cluster back to work</li> <li>Check node and VIP accessibility</li> <li>Check the rke2-coredns pod status <code>kubectl get pods --all-namespaces | grep rke2-coredns</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>RKE2 cluster on harvester <code>can recover</code> to <code>Active</code> status</p> 59-Create K3s Kubernetes Cluster https://harvester.github.io/tests/manual/harvester-rancher/59-create-k3s-kubernetes-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/59-create-k3s-kubernetes-cluster/ - Click Cluster Management Click Cloud Credentials Click create and select Harvester Input credential name Select existing cluster in the Imported Cluster list Click Create Click Clusters Click Create Toggle RKE2/K3s Select Harvester Input Cluster Name Select default namespace Select ubuntu image Select network vlan1 Input SSH User: ubuntu Click Show Advanced Add the following user data: password: 123456 chpasswd: { expire: false } ssh_pwauth: true Click the drop down Kubernetes version list + <ol> <li>Click Cluster Management</li> <li>Click Cloud Credentials</li> <li>Click create and select <code>Harvester</code></li> <li>Input credential name</li> <li>Select existing cluster in the <code>Imported Cluster</code> list</li> <li>Click Create</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/4a2f6a52-dac7-4a27-84b3-14cbeb4156aa" alt="image.png"></p> <ol> <li> <p>Click Clusters</p> </li> <li> <p>Click Create</p> </li> <li> <p>Toggle RKE2/K3s</p> </li> <li> <p>Select Harvester</p> </li> <li> <p>Input <code>Cluster Name</code></p> </li> <li> <p>Select <code>default</code> namespace</p> </li> <li> <p>Select ubuntu image</p> </li> <li> <p>Select network <code>vlan1</code></p> 60-Delete K3s Kubernetes Cluster 
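The user data added in test case 59 above is cloud-init; reformatted as a block, with the values exactly as given in the steps and the usual #cloud-config header added:
<pre tabindex="0"><code>#cloud-config
password: 123456
chpasswd: { expire: false }
ssh_pwauth: true
</code></pre>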
https://harvester.github.io/tests/manual/harvester-rancher/60-delete-k3s-kubernetes-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/60-delete-k3s-kubernetes-cluster/ - Open Cluster Management Check provisioned K3s cluster Click Delete from menu Expected Results Can remove K3s Cluster and it disappears from the Cluster page K3s Cluster will be removed from rancher menu under explore cluster K3s virtual machine should also be removed from Harvester + <ol> <li>Open Cluster Management</li> <li>Check provisioned K3s cluster</li> <li>Click <code>Delete</code> from menu</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove K3s Cluster and it disappears from the Cluster page</li> <li>K3s Cluster will be removed from rancher menu under explore cluster</li> <li>K3s virtual machine should also be removed from Harvester</li> </ol> 61-Deploy Harvester cloud provider to k3s Cluster https://harvester.github.io/tests/manual/harvester-rancher/61-deploy-harvester-cloud-provider-to-k3s-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/61-deploy-harvester-cloud-provider-to-k3s-cluster/ - Related task: #1812 K3s cloud provider and csi driver support Environment Setup Docker install rancher v2.6.4 Create one node harvester with enough resource Verify steps Follow step 1~13 in test plan 59-Create K3s Kubernetes Cluster Click the Edit yaml button Set disable-cloud-provider: true to disable default k3s cloud provider. Add cloud-provider=external to use harvester cloud provider. Create K3s cluster Download the Generate addon configuration for cloud provider Download Harvester kubeconfig and add into your local ~/. + <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/1812">#1812</a> K3s cloud provider and csi driver support</li> </ul> <h3 id="environment-setup">Environment Setup</h3> <ol> <li>Docker install rancher v2.6.4</li> <li>Create one node harvester with enough resource</li> </ol> <h3 id="verify-steps">Verify steps</h3> <p>Follow step <strong>1~13</strong> in test plan <code>59-Create K3s Kubernetes Cluster</code></p> <ol> <li>Click the Edit yaml button <img src="https://user-images.githubusercontent.com/29251855/166190410-47331a84-1d4e-4478-9d85-e68a3da91626.png" alt="image"></li> <li>Set <code>disable-cloud-provider: true</code> to disable default k3s cloud provider. <img src="https://user-images.githubusercontent.com/29251855/158510820-4d8a0021-1675-4c92-86b9-a6427f2e382b.png" alt="image"></li> <li>Add <code>cloud-provider=external</code> to use harvester cloud provider. 
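The remaining steps of test case 61 (continued in the next hunk) generate a kubeconfig for the guest cluster with generate_addon.sh and place it on the K3s VM; a short shell sketch using the script location and the example cluster name and namespace quoted in those steps:
<pre tabindex="0"><code># on a workstation that already has the Harvester kubeconfig merged into ~/.kube/config
git clone https://github.com/harvester/cloud-provider-harvester
cd cloud-provider-harvester
# prints the cloud-config/kubeconfig content for the guest cluster; copy it into
# /etc/kubernetes/cloud-config on the K3s VM, keeping the YAML indentation intact
./deploy/generate_addon.sh k3s-focal-cloud-provider default
</code></pre>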
<img src="https://user-images.githubusercontent.com/29251855/158511002-47a4a532-7f67-4eb0-8da4-074c6d9752e9.png" alt="image"></li> <li>Create K3s cluster <img src="https://user-images.githubusercontent.com/29251855/158511706-1c0c6af5-8909-4b1d-bc2a-0fa2fa26e000.png" alt="image"></li> <li>Download the <a href="https://github.com/harvester/cloud-provider-harvester/blob/master/deploy/generate_addon.sh">Generate addon configuration</a> for cloud provider</li> <li>Download Harvester kubeconfig and add into your local ~/.kube/config file</li> <li>Generate K3s kubeconfig by running generate addon script <code> ./deploy/generate_addon.sh &lt;k3s cluster name&gt; &lt;namespace&gt;</code> e.g <code>./generate_addon.sh k3s-focal-cloud-provider default</code></li> <li>Copy the kubeconfig content</li> <li>ssh to K3s VM <img src="https://user-images.githubusercontent.com/29251855/158534901-8fd22159-6a04-4592-ba25-ba4d73742a20.png" alt="image"></li> <li>Add kubeconfig content to <code>/etc/kubernetes/cloud-config</code> file, remember to align the yaml layout</li> <li>Install Harvester cloud provider <img src="https://user-images.githubusercontent.com/29251855/158512528-42ff575a-87a6-4424-bfb5-fa7af94ea74d.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/158512667-18b0249c-f859-4ae4-96b7-42ce873cb97a.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can install the Harvester cloud provider on k3s cluster correctly <img src="https://user-images.githubusercontent.com/29251855/158512758-d06df2f6-7094-4d41-b960-d50b26cd23fb.png" alt="image"></li> </ol> 62-Configure the K3s "DHCP" LoadBalancer service https://harvester.github.io/tests/manual/harvester-rancher/62-configure-k3s-dhcp-loadbalancer/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/62-configure-k3s-dhcp-loadbalancer/ - Prerequisite: Already provision K3s cluster and cloud provider on test plan 59-Create K3s Kubernetes Cluster 61-Deploy Harvester cloud provider to k3s Cluster Create Nginx workload for testing Create a test-nginx deployment with image nginx:latest. Add pod label test: test. Create a DHCP LoadBalancer Open Kubectl shell. Create test-dhcp-lb.yaml file. apiVersion: v1 kind: Service metadata: annotations: cloudprovider.harvesterhci.io/ipam: dhcp name: test-dhcp-lb namespace: default spec: ports: - name: http nodePort: 30172 port: 8080 protocol: TCP targetPort: 80 selector: test: test sessionAffinity: None type: LoadBalancer Run k apply -f test-dhcp-lb. + <p>Prerequisite: Already provision K3s cluster and cloud provider on test plan</p> <ul> <li>59-Create K3s Kubernetes Cluster</li> <li>61-Deploy Harvester cloud provider to k3s Cluster</li> </ul> <h4 id="create-nginx-workload-for-testing">Create Nginx workload for testing</h4> <ol> <li>Create a test-nginx deployment with image nginx:latest. <img src="https://user-images.githubusercontent.com/29251855/158512919-a35a079a-aa75-4ce8-bac6-a79438a2e112.png" alt="image"></li> <li>Add pod label test: test. <img src="https://user-images.githubusercontent.com/29251855/158513017-5afc909a-662a-4f4e-b867-2555241a2cbd.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/158513105-09ab472b-7cd4-4352-b4e1-84f673ee7088.png" alt="image"></li> </ol> <h4 id="create-a-dhcp-loadbalancer">Create a DHCP LoadBalancer</h4> <ol> <li>Open Kubectl shell.</li> <li>Create <code>test-dhcp-lb.yaml</code> file. 
<pre tabindex="0"><code>apiVersion: v1 kind: Service metadata: annotations: cloudprovider.harvesterhci.io/ipam: dhcp name: test-dhcp-lb namespace: default spec: ports: - name: http nodePort: 30172 port: 8080 protocol: TCP targetPort: 80 selector: test: test sessionAffinity: None type: LoadBalancer </code></pr 62-Configure the K3s "DHCP" LoadBalancer service https://harvester.github.io/tests/manual/harvester-rancher/64-configure-k3s-dhcp-lb-healcheck/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/64-configure-k3s-dhcp-lb-healcheck/ - Prerequisite: Already provision K3s cluster and cloud provider on test plan 59-Create K3s Kubernetes Cluster 61-Deploy Harvester cloud provider to k3s Cluster 62-Configure the K3s &ldquo;DHCP&rdquo; LoadBalancer service A Working DHCP load balancer service created on K3s cluster Edit Load balancer config Check the &ldquo;Add-on Config&rdquo; tabs Configure port, IPAM and health check related setting on Add-on Config page Expected Results Can create load balance service correctly Can route workload to nginx deployment + <p>Prerequisite: Already provision K3s cluster and cloud provider on test plan</p> <ul> <li>59-Create K3s Kubernetes Cluster</li> <li>61-Deploy Harvester cloud provider to k3s Cluster</li> <li>62-Configure the K3s &ldquo;DHCP&rdquo; LoadBalancer service</li> </ul> <ol> <li>A <code>Working</code> DHCP load balancer service created on K3s cluster</li> <li>Edit Load balancer config</li> <li>Check the &ldquo;Add-on Config&rdquo; tabs</li> <li>Configure <code>port</code>, <code>IPAM</code> and <code>health check</code> related setting on <code>Add-on Config</code> page <img src="https://user-images.githubusercontent.com/29251855/141245366-799057f1-2aa7-4d7a-90d2-5e11541ddbc3.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create load balance service correctly</li> <li>Can route workload to nginx deployment</li> </ol> 63-Configure the K3s "Pool" LoadBalancer service https://harvester.github.io/tests/manual/harvester-rancher/63-configure-k3s-pool-loadbalancer/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/63-configure-k3s-pool-loadbalancer/ - Prerequisite: Already provision K3s cluster and cloud provider on test plan 59-Create K3s Kubernetes Cluster 61-Deploy Harvester cloud provider to k3s Cluster Create Nginx workload for testing Create a test-nginx deployment with image nginx:latest. Add pod label test: test. Create a Pool LoadBalancer Modify vip-pool in Harvester settings. Open Kubectl shell. Create test-pool-lb.yaml file. apiVersion: v1 kind: Service metadata: annotations: cloudprovider.harvesterhci.io/ipam: pool name: test-pool-lb namespace: default spec: ports: - name: http nodePort: 32155 port: 8080 protocol: TCP targetPort: 80 selector: test: test sessionAffinity: None type: LoadBalancer Run k apply -f test-pool-lb. + <p>Prerequisite: Already provision K3s cluster and cloud provider on test plan</p> <ul> <li>59-Create K3s Kubernetes Cluster</li> <li>61-Deploy Harvester cloud provider to k3s Cluster</li> </ul> <h4 id="create-nginx-workload-for-testing">Create Nginx workload for testing</h4> <ol> <li>Create a test-nginx deployment with image nginx:latest. <img src="https://user-images.githubusercontent.com/29251855/158512919-a35a079a-aa75-4ce8-bac6-a79438a2e112.png" alt="image"></li> <li>Add pod label test: test. 
<img src="https://user-images.githubusercontent.com/29251855/158513017-5afc909a-662a-4f4e-b867-2555241a2cbd.png" alt="image"></li> </ol> <h4 id="create-a-pool-loadbalancer">Create a Pool LoadBalancer</h4> <ol> <li> <p>Modify vip-pool in Harvester settings. <img src="https://user-images.githubusercontent.com/29251855/158514040-bfcd9ff3-964a-4511-94d7-a497ef88848f.png" alt="image"></p> </li> <li> <p>Open Kubectl shell.</p> 65-Configure the K3s "Pool" LoadBalancer health check https://harvester.github.io/tests/manual/harvester-rancher/65-configure-k3s-pool-lb-healthcheck/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/65-configure-k3s-pool-lb-healthcheck/ - Prerequisite: Already provision K3s cluster and cloud provider on test plan 59-Create K3s Kubernetes Cluster 61-Deploy Harvester cloud provider to k3s Cluster 63-Configure the K3s &ldquo;Pool&rdquo; LoadBalancer service A Working DHCP load balancer service created on K3s cluster Edit Load balancer config Check the &ldquo;Add-on Config&rdquo; tabs Configure port, IPAM and health check related setting on Add-on Config page Expected Results Can create load balance service correctly Can route workload to nginx deployment + <p>Prerequisite: Already provision K3s cluster and cloud provider on test plan</p> <ul> <li>59-Create K3s Kubernetes Cluster</li> <li>61-Deploy Harvester cloud provider to k3s Cluster</li> <li>63-Configure the K3s &ldquo;Pool&rdquo; LoadBalancer service</li> </ul> <ol> <li>A <code>Working</code> DHCP load balancer service created on K3s cluster</li> <li>Edit Load balancer config</li> <li>Check the &ldquo;Add-on Config&rdquo; tabs</li> <li>Configure <code>port</code>, <code>IPAM</code> and <code>health check</code> related setting on <code>Add-on Config</code> page <img src="https://user-images.githubusercontent.com/29251855/141245366-799057f1-2aa7-4d7a-90d2-5e11541ddbc3.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create load balance service correctly</li> <li>Can route workload to nginx deployment</li> </ol> 66-Deploy Harvester csi driver to k3s Cluster https://harvester.github.io/tests/manual/harvester-rancher/66-deploy-harvester-csi-driver-to-k3s-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/66-deploy-harvester-csi-driver-to-k3s-cluster/ - Related task: #2755 Steps to manually install Harvester csi-driver on K3s cluster Reference Document Deploying with Harvester K3s Node Driver Verify steps Prepare a Harvester cluster with enough cpu, memory and disks for K3s guest cluster Create a Rancher instance Import Harvester in Rancher and create cloud credential ssh to Harvester management node Extract the kubeconfig of Harvester with cat /etc/rancher/rke2/rke2.yaml Change the server value from https://127.0.0.1:6443/ to your VIP + <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/2755#issuecomment-1552842389">#2755</a> Steps to manually install Harvester csi-driver on K3s cluster</li> </ul> <h3 id="reference-document">Reference Document</h3> <p><a href="https://deploy-preview-309--harvester-preview.netlify.app/dev/rancher/csi-driver/#deploying-with-harvester-k3s-node-driver">Deploying with Harvester K3s Node Driver</a></p> <h3 id="verify-steps">Verify steps</h3> <ol> <li> <p>Prepare a Harvester cluster with enough cpu, memory and disks for K3s guest cluster</p> </li> <li> <p>Create a Rancher instance</p> </li> <li> <p>Import Harvester in Rancher and create 
cloud credential</p> </li> <li> <p>ssh to Harvester management node</p> 67-Harvester persistent volume on k3s Cluster https://harvester.github.io/tests/manual/harvester-rancher/67-harvester-persistent-volume-on-k3s-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/67-harvester-persistent-volume-on-k3s-cluster/ - Related task: #2755 Steps to manually install Harvester csi-driver on K3s cluster Verify steps Follow test case 66-Deploy Harvester csi driver to k3s Cluster to manually install csi-driver on k3s cluster Create a nginx deployment in Workload -&gt; Deployments Create a Persistent Volume Claims, select storage class to harvester Select the Single-Node Read/Write Open Harvester Volumes page, check the corresponding volume exists Click Execute shell to access Nginx container. + <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/2755#issuecomment-1552842389">#2755</a> Steps to manually install Harvester csi-driver on K3s cluster</li> </ul> <h3 id="verify-steps">Verify steps</h3> <p>Follow test case <code>66-Deploy Harvester csi driver to k3s Cluster</code> to manually install csi-driver on k3s cluster</p> <ol> <li> <p>Create a nginx deployment in Workload -&gt; Deployments</p> </li> <li> <p>Create a Persistent Volume Claims, select storage class to <code>harvester</code></p> </li> <li> <p>Select the <code>Single-Node Read/Write</code></p> </li> <li> <p>Open Harvester Volumes page, check the corresponding volume exists <img src="https://github.com/harvester/harvester/assets/29251855/8330c45f-ade1-4819-b2f0-5206e32123b6" alt="image"></p> 68-Fully airgapped rancher integrate with harvester with no proxy https://harvester.github.io/tests/manual/harvester-rancher/68-fully-airgapped-rancher-integrate-harvester-no-proxy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/68-fully-airgapped-rancher-integrate-harvester-no-proxy/ - Related task: #1808 RKE2 provisioning fails when Rancher has no internet access (air-gapped) Note1: in fully air gapped environment, you have to setup private docker hub registry and pull all rancher related offline image Note2: Please use SUSE SLES JeOS image, it have qemu-guest-agent already installed, thus the guest VM can get IP correctly Environment Setup Setup the airgapped harvester Fetch ipxe vagrant example with new offline feature https://github.com/harvester/ipxe-examples/pull/32 Edit the setting. 
+ <ul> <li> <p>Related task: <a href="https://github.com/harvester/harvester/issues/1808">#1808</a> RKE2 provisioning fails when Rancher has no internet access (air-gapped)</p> </li> <li> <p><strong>Note1</strong>: in fully air gapped environment, you have to setup private docker hub registry and pull all rancher related offline image</p> </li> <li> <p><strong>Note2</strong>: Please use SUSE SLES JeOS image, it have <code>qemu-guest-agent</code> already installed, thus the guest VM can get IP correctly</p> </li> </ul> <h3 id="environment-setup">Environment Setup</h3> <p>Setup the airgapped harvester</p> 69-DHCP Harvester LoadBalancer service no health check https://harvester.github.io/tests/manual/harvester-rancher/69-dhcp-loadbalancer-service-no-health-check/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/69-dhcp-loadbalancer-service-no-health-check/ - Prerequisite: Already provision RKE1/RKE2 cluster in previous test case Open Global Settings in hamburger menu Replace ui-dashboard-index to https://releases.rancher.com/harvester-ui/dashboard/latest/index.html Change ui-offline-preferred to Remote Refresh the current page (ctrl + r) Open provisioned RKE2 cluster from hamburger menu Drop down Service Discovery Click Services Click Create Select Load Balancer Given service name to make the load balancer name composed of the cluster name, namespace, svc name, and suffix(8 characters) more than 63 characters Provide Listening port and Target port Click Add-on Config + <p>Prerequisite: Already provision RKE1/RKE2 cluster in previous test case</p> <ol> <li>Open <code>Global Settings</code> in hamburger menu</li> <li>Replace <code>ui-dashboard-index</code> to <code>https://releases.rancher.com/harvester-ui/dashboard/latest/index.html</code></li> <li>Change <code>ui-offline-preferred</code> to <code>Remote</code></li> <li>Refresh the current page (ctrl + r)</li> <li>Open provisioned RKE2 cluster from hamburger menu</li> <li>Drop down <code>Service Discovery</code></li> <li>Click <code>Services</code></li> <li>Click Create</li> <li>Select <code>Load Balancer</code></li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/f628094c-a195-4f99-9fb7-858d759dc019" alt="image.png"></p> <ol> <li>Given service name to make the load balancer name composed of the cluster name, namespace, svc name, and suffix(8 characters) more than 63 characters</li> <li>Provide Listening port and Target port</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/2c20c759-4769-438b-94ad-5b995ba66873" alt="image.png"></p> 70-Pool LoadBalancer service no health check https://harvester.github.io/tests/manual/harvester-rancher/70-pool-loadbalancer-service-no-health-check/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/70-pool-loadbalancer-service-no-health-check/ - Prerequisite: Already provision RKE1/RKE2 cluster in previous test case Open Global Settings in hamburger menu Replace ui-dashboard-index to https://releases.rancher.com/harvester-ui/dashboard/latest/index.html Change ui-offline-preferred to Remote Refresh the current page (ctrl + r) Access Harvester dashboard UI Go to Settings Create a vip-pool in Harvester settings. 
Open provisioned RKE2 cluster from hamburger menu Drop down Service Discovery Click Services Click Create Select Load Balancer Given service name Provide Listening port and Target port Click Add-on Config + <p>Prerequisite: Already provision RKE1/RKE2 cluster in previous test case</p> <ol> <li> <p>Open <code>Global Settings</code> in hamburger menu</p> </li> <li> <p>Replace <code>ui-dashboard-index</code> to <code>https://releases.rancher.com/harvester-ui/dashboard/latest/index.html</code></p> </li> <li> <p>Change <code>ui-offline-preferred</code> to <code>Remote</code></p> </li> <li> <p>Refresh the current page (ctrl + r)</p> </li> <li> <p>Access Harvester dashboard UI</p> </li> <li> <p>Go to Settings</p> </li> <li> <p>Create a vip-pool in Harvester settings. <img src="https://user-images.githubusercontent.com/29251855/158514040-bfcd9ff3-964a-4511-94d7-a497ef88848f.png" alt="image"></p> </li> <li> <p>Open provisioned RKE2 cluster from hamburger menu</p> 71-Manually Deploy Harvester csi driver to RKE2 Cluster https://harvester.github.io/tests/manual/harvester-rancher/71-manually-deploy-csi-driver-to-rke2-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/71-manually-deploy-csi-driver-to-rke2-cluster/ - Related task: #2755 Steps to manually install Harvester csi-driver on RKE2 cluster Reference Document Deploying with Harvester RKE2 Node Driver Verify steps ssh to Harvester management node Extract the kubeconfig of Harvester with cat /etc/rancher/rke2/rke2.yaml Change the server value from https://127.0.0.1:6443/ to your VIP Copy the kubeconfig and add into your local ~/.kube/config file Import Harvester in Rancher Create cloud credential Provision a RKE2 cluster Provide the login credential in user data + <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/2755#issuecomment-1552839577">#2755</a> Steps to manually install Harvester csi-driver on RKE2 cluster</li> </ul> <h3 id="reference-document">Reference Document</h3> <p><a href="https://deploy-preview-309--harvester-preview.netlify.app/dev/rancher/csi-driver/#deploying-with-harvester-rke2-node-driver">Deploying with Harvester RKE2 Node Driver</a></p> <h3 id="verify-steps">Verify steps</h3> <ol> <li> <p>ssh to Harvester management node</p> </li> <li> <p>Extract the kubeconfig of Harvester with <code>cat /etc/rancher/rke2/rke2.yaml</code></p> </li> <li> <p>Change the server value from https://127.0.0.1:6443/ to your VIP</p> </li> <li> <p>Copy the kubeconfig and add into your local ~/.kube/config file</p> 72-Use ipxe example to test fully airgapped rancher integration https://harvester.github.io/tests/manual/harvester-rancher/72-ipxe-auto-airgapped-rancher-integrate-harvester-/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/72-ipxe-auto-airgapped-rancher-integrate-harvester-/ - Related task: #1808 RKE2 provisioning fails when Rancher has no internet access (air-gapped) Note1: In this test, we use vagrant-pxe-airgap-harvester to automatically provide the fully airgapped environment Note1: Compared to test case 68, we don&rsquo;t need to manually create a separate VM for the Rancher instance and docker private registry, all the prerequisite environment can be done with the vagrant-pxe-airgap-harvester solution Environment Setup Phase 1: Create airgapped Harvester cluster, Rancher and private registry Clone the latest ipxe-example which include the vagrant-pxe-airgap-harvester Follow the Sample Host Loadout and Prerequisites 
in readme to prepare the prerequisite package If you use Opensuse Leap operating system, you may need to comment out the following line in Vagrantfile file # libvirt. + <ul> <li> <p>Related task: <a href="https://github.com/harvester/harvester/issues/1808">#1808</a> RKE2 provisioning fails when Rancher has no internet access (air-gapped)</p> </li> <li> <p><strong>Note1</strong>: In this test, we use <a href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-airgap-harvester">vagrant-pxe-airgap-harvester</a> to automatically provide the fully airgapped environment</p> </li> <li> <p><strong>Note1</strong>: Compared to test case 68, we don&rsquo;t need to manually create a separate VM for the Rancher instance and docker private registry, all the prerequisite environment can be done with the <code>vagrant-pxe-airgap-harvester</code> solution</p> Adapt alertmanager to dedicated storage network https://harvester.github.io/tests/manual/_incoming/2715_adapt_alertmanager_to_dedicated_storage_network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2715_adapt_alertmanager_to_dedicated_storage_network/ - Ref: https://github.com/harvester/harvester/issues/2715 criteria PVCs (alertmanager/grafana/Prometheus) will attach back after dedicated storage network switched. Verify Steps: Install Harvester with any nodes Navigate to Networks -&gt; Cluster Networks/Configs, create Cluster Network named vlan, create Network Config for all nodes Navigate to Advanced -&gt; Settings, edit storage-network Select Enable then select vlan as cluster network, fill in VLAN ID and IP Range Wait until error message (displayed under storage network setting) disappeared Navigate to Monitoring &amp; Logging -&gt; Monitoring -&gt; Configuration Dashboard of Prometheus Graph, Grafana and Altertmanager should able to access, and should contain old data. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2715">https://github.com/harvester/harvester/issues/2715</a></p> <h3 id="criteria">criteria</h3> <p>PVCs (alertmanager/grafana/Prometheus) will attach back after dedicated storage network switched.</p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Navigate to <em>Networks -&gt; Cluster Networks/Configs</em>, create Cluster Network named <code>vlan</code>, create <strong>Network Config</strong> for all nodes</li> <li>Navigate to <em>Advanced -&gt; Settings</em>, edit <code>storage-network</code></li> <li>Select <code>Enable</code> then select <code>vlan</code> as cluster network, fill in <strong>VLAN ID</strong> and <strong>IP Range</strong></li> <li>Wait until error message (displayed under <em>storage network</em> setting) disappeared</li> <li>Navigate to <em>Monitoring &amp; Logging -&gt; Monitoring -&gt; Configuration</em></li> <li>Dashboard of Prometheus Graph, Grafana and Altertmanager should able to access, and should contain old data.</li> </ol> Add a custom "Docker Install URL" https://harvester.github.io/tests/manual/node-driver/cluster-custom-docker-install-url/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-custom-docker-install-url/ - add a harvester node template Refer to the &ldquo;Test Data&rdquo; value setting. 
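For the storage-network case above, whether the dedicated storage network was accepted and the monitoring PVCs re-attached can also be checked from the CLI; a minimal sketch, assuming kubectl access to the Harvester cluster (cattle-monitoring-system is the usual rancher-monitoring namespace):

    # Inspect the storage-network setting value after saving it in the UI
    kubectl get settings.harvesterhci.io storage-network -o jsonpath='{.value}'; echo
    # Watch the alertmanager/grafana/prometheus PVCs come back to Bound
    kubectl get pvc -n cattle-monitoring-system -w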
Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. + <ol> <li>add a harvester node template</li> <li>Refer to the &ldquo;Test Data&rdquo; value setting.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Add a custom "Insecure Registries" https://harvester.github.io/tests/manual/node-driver/cluster-custom-insecure-registries/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-custom-insecure-registries/ - add a harvester node template Refer to the &ldquo;Test Data&rdquo; value setting. Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration Go to node, execute docker info, check the &ldquo;Insecure Registries&rdquo; setting is &ldquo;harbor.wujing.site&rdquo; Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. 
+ <ol> <li>add a harvester node template</li> <li>Refer to the &ldquo;Test Data&rdquo; value setting.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> <li>Go to node, execute <code>docker info</code>, check the &ldquo;Insecure Registries&rdquo; setting is &ldquo;harbor.wujing.site&rdquo;</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Add a custom "Registry Mirrors" https://harvester.github.io/tests/manual/node-driver/cluster-custom-registry-mirrors/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-custom-registry-mirrors/ - add a harvester node template Refer to the &ldquo;Test Data&rdquo; value setting. Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration Go to node, execute &ldquo;docker info&rdquo;, check the &ldquo;Registry Mirrors&rdquo; setting is &ldquo;https://s06nkgus.mirror.aliyuncs.com&rdquo; Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. + <ol> <li>add a harvester node template</li> <li>Refer to the &ldquo;Test Data&rdquo; value setting.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> <li>Go to node, execute &ldquo;docker info&rdquo;, check the &ldquo;Registry Mirrors&rdquo; setting is &ldquo;<a href="https://s06nkgus.mirror.aliyuncs.com">https://s06nkgus.mirror.aliyuncs.com</a>&rdquo;</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Add a custom "Storage Driver" https://harvester.github.io/tests/manual/node-driver/cluster-custom-storage-driver/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-custom-storage-driver/ - add a harvester node template Refer to the &ldquo;Test Data&rdquo; value setting. 
Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration Go to node, execute &ldquo;docker info&rdquo;, check the Storage Driver setting is overlay Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. + <ol> <li>add a harvester node template</li> <li>Refer to the &ldquo;Test Data&rdquo; value setting.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> <li>Go to node, execute &ldquo;docker info&rdquo;, check the Storage Driver setting is overlay</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Add a network to an existing VM with only 1 network (e2e_be_fe) https://harvester.github.io/tests/manual/virtual-machines/add-a-network-to-an-existing-vm-with-only-1-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/add-a-network-to-an-existing-vm-with-only-1-network/ - Add a network to the VM Save the VM Wait for it to start/restart Expected Results the VM should start successfully The already existing network connectivity should still work The new connectivity should also work + <ol> <li>Add a network to the VM</li> <li>Save the VM</li> <li>Wait for it to start/restart</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>the VM should start successfully</li> <li>The already existing network connectivity should still work</li> <li>The new connectivity should also work</li> </ol> Add a network to an existing VM with two networks https://harvester.github.io/tests/manual/virtual-machines/add-a-network-to-an-existing-vm-with-two-networks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/add-a-network-to-an-existing-vm-with-two-networks/ - Add a network to the VM Save the VM Wait for it to start/restart Expected Results the VM should start successfully The already existing network connectivity should still work The new connectivity should also work + <ol> <li>Add a network to the VM</li> <li>Save the VM</li> <li>Wait for it to start/restart</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>the VM should start successfully</li> <li>The already existing network connectivity should still work</li> <li>The new connectivity should also work</li> </ol> Add a node to existing cluster (e2e_be) https://harvester.github.io/tests/manual/deployment/add-node-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/add-node-cluster/ - Start with harvester installer and select &lsquo;Join an existing Harvester cluster&rsquo; Provide the 
management ip and cluster token Expected Results On completion, Harvester should show the same management url as of existing node and status as ready. Check the host section, the joined node must appear + <ol> <li>Start with harvester installer and select &lsquo;Join an existing Harvester cluster&rsquo;</li> <li>Provide the management ip and cluster token</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>On completion, Harvester should show the same management url as of existing node and status as ready.</li> <li>Check the host section, the joined node must appear</li> </ol> Add backup-taget connection status https://harvester.github.io/tests/manual/_incoming/2631_add_backup-taget_connection_status/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2631_add_backup-taget_connection_status/ - Ref: https://github.com/harvester/harvester/issues/2631 Verified this feature has been implemented. Test Information Environment: qemu/KVM 2 nodes Harvester Version: master-032742f0-head ui-source Option: Auto Verify Steps: Install Harvester with any nodes Login to Dashboard then navigate to Advanced/Settings Setup a invalid NFS/S3 backup-target, then click Test connection button, error message should displayed Setup a valid NFS/S3 backup-target, then click Test connection button, notify message should displayed Navigate to Advanced/VM Backups, notify message should NOT displayed Navigate to Advanced/Settings and stop the backup-target server, then navigate to Advanced/VM Backups, error message should displayed + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2631">https://github.com/harvester/harvester/issues/2631</a></p> <p>Verified this feature has been implemented.</p> <p><img src="https://user-images.githubusercontent.com/5169694/190369936-c07b0a5f-8685-4813-8108-1032caf09183.png" alt="image"></p> <h2 id="test-information">Test Information</h2> <ul> <li>Environment: <strong>qemu/KVM 2 nodes</strong></li> <li>Harvester Version: <strong>master-032742f0-head</strong></li> <li><strong>ui-source</strong> Option: <strong>Auto</strong></li> </ul> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to <em>Advanced/Settings</em></li> <li>Setup a invalid NFS/S3 backup-target, then click <strong>Test connection</strong> button, error message should displayed</li> <li>Setup a valid NFS/S3 backup-target, then click <strong>Test connection</strong> button, notify message should displayed</li> <li>Navigate to <em>Advanced/VM Backups</em>, notify message should NOT displayed</li> <li>Navigate to <em>Advanced/Settings</em> and stop the backup-target server, then navigate to <em>Advanced/VM Backups</em>, error message should displayed</li> </ol> Add cluster driver https://harvester.github.io/tests/manual/node-driver/add-cluster-driver/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/add-cluster-driver/ - Cluster Management &gt; Drivers &gt; Node Drivers Click &ldquo;Add Node driver&rdquo; Add the correct configuration and save Expected Results Created successfully, status is active + <ol> <li>Cluster Management &gt; Drivers &gt; Node Drivers</li> <li>Click &ldquo;Add Node driver&rdquo;</li> <li>Add the correct configuration and save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Created successfully, status is active</li> </ol> Add extra disks by using raw disks 
https://harvester.github.io/tests/manual/_incoming/extra-disk-using-raw-disk/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/extra-disk-using-raw-disk/ - Prepare a disk (with WWN) and attach it to the node. Navigate to &ldquo;Host&rdquo; &gt; &ldquo;Edit Config&rdquo; &gt; &ldquo;Disks&rdquo; and open the dropdown menu &ldquo;Add disks&rdquo;. Choose a disk to add, e.g. /dev/sda but not /dev/sda1. Expected Results The raw disk shall be schedulable as a longhorn disk as a whole (without any partition). Ths raw disk shall be in provisioned phase. Reboot the host and the disk shall be reattached and added back as a longhorn disk. + <ol> <li>Prepare a disk (with WWN) and attach it to the node.</li> <li>Navigate to &ldquo;Host&rdquo; &gt; &ldquo;Edit Config&rdquo; &gt; &ldquo;Disks&rdquo; and open the dropdown menu &ldquo;Add disks&rdquo;.</li> <li>Choose a disk to add, e.g. <code>/dev/sda</code> but not <code>/dev/sda1</code>.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The raw disk shall be schedulable as a longhorn disk as a whole (without any partition).</li> <li>Ths raw disk shall be in <code>provisioned</code> phase.</li> <li>Reboot the host and the disk shall be reattached and added back as a longhorn disk.</li> </ol> Add Labels (e2e_be_fe) https://harvester.github.io/tests/manual/images/add-labels/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/add-labels/ - Add multiple labels to the images. Click save Expected Results Labels should be added successfully + <ol> <li>Add multiple labels to the images.</li> <li>Click save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Labels should be added successfully</li> </ol> Add multiple Networks via form https://harvester.github.io/tests/manual/network/add-multiple-networks-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/add-multiple-networks-form/ - Create a new VM via the web form Add both a management network and an external VLAN network Validate both interfaces exist in the VM ip link list Ping the VM from another VM that is only on the management VLAN Ping the VM from an external machine Expected Results The VM should create You should see three interfaces listed in VM You should get responses from pinging the VM You should get responses from pinging the VM + <ol> <li>Create a new VM via the web form</li> <li>Add both a management network and an external VLAN network</li> <li>Validate both interfaces exist in the VM <ul> <li><code>ip link list</code></li> </ul> </li> <li>Ping the VM from another VM that is only on the management VLAN</li> <li>Ping the VM from an external machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should create</li> <li>You should see three interfaces listed in VM</li> <li>You should get responses from pinging the VM</li> <li>You should get responses from pinging the VM</li> </ol> Add multiple Networks via YAML (e2e_be) https://harvester.github.io/tests/manual/network/add-multiple-networks-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/add-multiple-networks-yaml/ - Create a new VM via YAML Add both a management network and an external VLAN network Validate both interfaces exist in the VM ip link list Ping the VM from another VM that is only on the management VLAN Ping the VM from an external machine Expected Results The VM should create You should see three interfaces listed in VM You should get 
responses from pinging the VM You should get responses from pinging the VM + <ol> <li>Create a new VM via YAML</li> <li>Add both a management network and an external VLAN network</li> <li>Validate both interfaces exist in the VM <ul> <li><code>ip link list</code></li> </ul> </li> <li>Ping the VM from another VM that is only on the management VLAN</li> <li>Ping the VM from an external machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should create</li> <li>You should see three interfaces listed in VM</li> <li>You should get responses from pinging the VM</li> <li>You should get responses from pinging the VM</li> </ol> Add network reachability detection from host for the VLAN network https://harvester.github.io/tests/manual/network/add-network-reachability-detection-from-host-for-vlan-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/add-network-reachability-detection-from-host-for-vlan-network/ - Related issue: #1476 Add network reachability detection from host for the VLAN network Category: Network Environment Setup The network environment must have vlan network configured and also have DHCP server prepared on your testing vlan Verification Steps Enable virtual network with harvester-mgmt in harvester Create VLAN 806 with id 806 and set to default auto mode Import harvester to rancher 1 .Create cloud credential Provision a rke2 cluster to harvester Deploy a nginx server workload Open Service Discover -&gt; Services + <ul> <li>Related issue: <a href="https://github.com/harvester/harvester/issues/1476">#1476</a> Add network reachability detection from host for the VLAN network</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="environment-setup">Environment Setup</h2> <ul> <li>The network environment must have vlan network configured and also have DHCP server prepared on your testing vlan</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Enable virtual network with <code>harvester-mgmt</code> in harvester</li> <li>Create VLAN 806 with id <code>806</code> and set to default <code>auto</code> mode</li> <li>Import harvester to rancher 1 .Create cloud credential</li> <li>Provision a rke2 cluster to harvester <img src="https://user-images.githubusercontent.com/29251855/145564732-0a3cee15-a264-407f-800a-df2e7c649846.png" alt="image"></li> </ol> <p><img src="https://user-images.githubusercontent.com/29251855/145564961-c921f341-2c88-44cc-9c5e-08789e594552.png" alt="image"></p> Add the different roles to the cluster https://harvester.github.io/tests/manual/node-driver/q-cluster-different-roles/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/q-cluster-different-roles/ - Create three users user1, user2, user3 Give the roles of Cluster Owner to user1, Create Project to user2 and Cluster Member to user3 respectively. Login with these three roles Expected Results + <ol> <li>Create three users user1, user2, user3</li> <li>Give the roles of Cluster Owner to user1, Create Project to user2 and Cluster Member to user3 respectively.</li> <li>Login with these three roles</li> </ol> <h2 id="expected-results">Expected Results</h2> Add VLAN network (e2e_be) https://harvester.github.io/tests/manual/network/add-vlan-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/add-vlan-network/ - Environment setup This should be done on a Harvester setup with at least 2 NICs and at least 2 nodes. 
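For the two multiple-network cases above, the in-guest validation can be scripted; a minimal sketch to run inside the guest VM (the interface names and peer IPs are placeholders, not part of the test data):

    # List links and addresses; expect two NICs besides lo
    ip link list
    ip -4 addr show
    # Reachability over each network (replace the placeholder IPs)
    ping -c 4 <management-peer-ip>
    ping -c 4 <external-vlan-peer-ip>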
This is easily tested in Vagrant Verification Steps Open settings on a harvester cluster Navigate to the VLAN settings page Click Enabled Check dropdown for NICs and verify that percentage is showing 100% Add the NIC Click Save Validate that it has updated in settings Expected Results You should be able to add the VLAN network device You should see in the settings list that it has your new default NIC + <h2 id="environment-setup">Environment setup</h2> <p>This should be done on a Harvester setup with at least 2 NICs and at least 2 nodes. This is easily tested in Vagrant</p> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Open settings on a harvester cluster</li> <li>Navigate to the VLAN settings page</li> <li>Click Enabled</li> <li>Check dropdown for NICs and verify that percentage is showing 100%</li> <li>Add the NIC</li> <li>Click Save</li> <li>Validate that it has updated in settings</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should be able to add the VLAN network device</li> <li>You should see in the settings list that it has your new default NIC</li> </ol> Add websocket disconnect notification https://harvester.github.io/tests/manual/_incoming/2186_add_websocket_disconnect_notification/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2186_add_websocket_disconnect_notification/ - Ref: https://github.com/harvester/harvester/issues/2186 Verify Steps: Install Harvester with at least 2 nodes Login to Dashboard via Node IP Navigate to Advanced/Settings and update ui-index to https://releases.rancher.com/harvester-ui/dashboard/release-harvester-v1.0/index.html and force refresh to make it applied. restart the Node which holding the IP Notification of websocket disconnected should appeared + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2186">https://github.com/harvester/harvester/issues/2186</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/177529443-a9478e33-a955-4b48-8485-ab6eabbf3824.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Login to Dashboard via Node IP</li> <li>Navigate to <em>Advanced/Settings</em> and update <strong>ui-index</strong> to <code>https://releases.rancher.com/harvester-ui/dashboard/release-harvester-v1.0/index.html</code> and force refresh to make it applied.</li> <li>restart the Node which holding the IP</li> <li>Notification of websocket disconnected should appeared</li> </ol> Add/remove a node in the created harvester cluster https://harvester.github.io/tests/manual/node-driver/cluster-add-remove-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-add-remove-node/ - add/remove a node in the created harvester cluster Expected Results rancher on the cluster modified successfully harvester corresponding VM node added/removed successfully + <ol> <li>add/remove a node in the created harvester cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>rancher on the cluster modified successfully</li> <li>harvester corresponding VM node added/removed successfully</li> </ol> Add/remove disk to Host config https://harvester.github.io/tests/manual/hosts/1623-add-disk-to-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1623-add-disk-to-host/ - Related issues: #1623 Unable to add additional disks to host config Environment setup Add Disk that isn&rsquo;t assigned to host Verification Steps 
Head to &ldquo;Hosts&rdquo; page Click &ldquo;Edit Config&rdquo; on a node and switch to &ldquo;Disks&rdquo; tab Validate: Open dropdown and see no disks Attach a disk on that node Validate: Open dropdown and see some disks Verify that host shows new disk as available storage and Longhorn is showing new schedulable space Detach a disk on that node Validate: Open dropdown and see no disks Verify that host shows new disk as available storage and Longhorn is showing new schedulable space Expected Results Disk space should show appropriately + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1623">#1623</a> Unable to add additional disks to host config</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ol> <li>Add Disk that isn&rsquo;t assigned to host</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Head to &ldquo;Hosts&rdquo; page</li> <li>Click &ldquo;Edit Config&rdquo; on a node and switch to &ldquo;Disks&rdquo; tab</li> <li>Validate: Open dropdown and see no disks</li> <li>Attach a disk on that node</li> <li>Validate: Open dropdown and see some disks</li> <li>Verify that host shows new disk as available storage and Longhorn is showing new schedulable space</li> <li>Detach a disk on that node</li> <li>Validate: Open dropdown and see no disks</li> <li>Verify that host shows new disk as available storage and Longhorn is showing new schedulable space</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Disk space should show appropriately <img src="https://user-images.githubusercontent.com/83787952/146289651-3c8b8da7-5ba1-4a15-aa4f-32f24af4b8dc.png" alt="image"></li> </ol> Add/remove disk to Host config https://harvester.github.io/tests/manual/volumes/1623-add-disk-to-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/1623-add-disk-to-host/ - Related issues: #1623 Unable to add additional disks to host config Environment setup Add Disk that isn&rsquo;t assigned to host Verification Steps Head to &ldquo;Hosts&rdquo; page Click &ldquo;Edit Config&rdquo; on a node and switch to &ldquo;Disks&rdquo; tab Validate: Open dropdown and see no disks Attach a disk on that node Validate: Open dropdown and see some disks Verify that host shows new disk as available storage and Longhorn is showing new schedulable space Detach a disk on that node Validate: Open dropdown and see no disks Verify that host shows new disk as available storage and Longhorn is showing new schedulable space Expected Results Disk space should show appropriately + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1623">#1623</a> Unable to add additional disks to host config</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ol> <li>Add Disk that isn&rsquo;t assigned to host</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Head to &ldquo;Hosts&rdquo; page</li> <li>Click &ldquo;Edit Config&rdquo; on a node and switch to &ldquo;Disks&rdquo; tab</li> <li>Validate: Open dropdown and see no disks</li> <li>Attach a disk on that node</li> <li>Validate: Open dropdown and see some disks</li> <li>Verify that host shows new disk as available storage and Longhorn is showing new schedulable space</li> <li>Detach a disk on that node</li> <li>Validate: Open dropdown and see no disks</li> <li>Verify that host shows new disk as available storage and Longhorn is showing new schedulable space</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Disk space should 
show appropriately <img src="https://user-images.githubusercontent.com/83787952/146289651-3c8b8da7-5ba1-4a15-aa4f-32f24af4b8dc.png" alt="image"></li> </ol> Additional trusted CA configure-ability https://harvester.github.io/tests/manual/deployment/additional-trusted-ca/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/additional-trusted-ca/ - Ref: https://github.com/harvester/harvester/issues/1260 Verify Items Image download with self-signed additional-ca VM backup with self-signed additional-ca Case: Image downlaod Install Harvester with ipxe-example which includes https://github.com/harvester/ipxe-examples/pull/36 Upload any valid iso to pxe-server&rsquo;s /var/www/ Use Browser to access https://&lt;pxe-server-ip&gt;/&lt;iso-file&gt; should be valid Add self-signed cert to Harvester Navigate to Harvester Advanced Settings, edit additional-ca cert content can be retrieved in pxe-server /etc/ssl/certs/nginx-selfsigned.crt Create Image with the same URL https://&lt;pxe-server-ip&gt;/&lt;iso-file&gt; Image should be downloaded Case: VM backup Install Harvester with ipxe-example setup Minio in pxe-server follow instruction to download binary and start the service login to UI console then add region and create bucket follow instruction to generate self-signed cert with IP SANs restart service with self-signed cert Add self-signed cert to Harvester Add local Minio info as S3 into backup-target Backup-Target Should not pop up any Error Message Create Image for VM creation Create VM with any resource Perform VM backup VM&rsquo;s data Should be backup into Minio&rsquo;s folder + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1260">https://github.com/harvester/harvester/issues/1260</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Image download with self-signed additional-ca</li> <li>VM backup with self-signed additional-ca</li> </ul> <h3 id="case-image-downlaod">Case: Image downlaod</h3> <ol> <li>Install Harvester with ipxe-example which includes <a href="https://github.com/harvester/ipxe-examples/pull/36">https://github.com/harvester/ipxe-examples/pull/36</a></li> <li>Upload any valid iso to <strong>pxe-server</strong>&rsquo;s <code>/var/www/</code></li> <li>Use Browser to access <code>https://&lt;pxe-server-ip&gt;/&lt;iso-file&gt;</code> should be valid</li> <li>Add self-signed cert to Harvester <ul> <li>Navigate to Harvester <em>Advanced Settings</em>, edit <em>additional-ca</em></li> <li>cert content can be retrieved in pxe-server <code>/etc/ssl/certs/nginx-selfsigned.crt</code></li> </ul> </li> <li>Create Image with the same URL <code>https://&lt;pxe-server-ip&gt;/&lt;iso-file&gt;</code></li> <li>Image should be downloaded</li> </ol> <h3 id="case-vm-backup">Case: VM backup</h3> <ol> <li>Install Harvester with ipxe-example</li> <li>setup <strong>Minio</strong> in pxe-server <ul> <li>follow <a href="https://docs.min.io/docs/minio-quickstart-guide.html">instruction</a> to download binary and start the service</li> <li>login to UI console then add region and create bucket</li> <li>follow <a href="https://docs.min.io/docs/how-to-secure-access-to-minio-server-with-tls.html#using-open-ssl">instruction</a> to generate self-signed cert with IP SANs</li> <li>restart service with self-signed cert</li> </ul> </li> <li>Add self-signed cert to Harvester</li> <li>Add local <strong>Minio</strong> info as S3 into <strong>backup-target</strong></li> <li>Backup-Target Should not pop up any Error Message</li> <li>Create Image for VM creation</li> 
<li>Create VM with any resource</li> <li>Perform VM backup</li> <li>VM&rsquo;s data Should be backup into <strong>Minio</strong>&rsquo;s folder</li> </ol> Agent Node should not rely on specific master Node https://harvester.github.io/tests/manual/hosts/agent_node_connectivity/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/agent_node_connectivity/ - Ref: https://github.com/harvester/harvester/issues/1521 Verify Items Agent Node should keep connection when any master Node is down Case: Agent Node&rsquo;s connecting status Install Harvester with 4 nodes which joining node MUST join by VIP (point server-url to use VIP) Make sure all nodes are ready Login to dashboard, check host state become Active SSH to the 1st node, run command kubectl get node to check all STATUS should be Ready SSH to agent nodes which ROLES IS &lt;none&gt; in Step 2i&rsquo;s output Output should contains VIP in the server URL, by run command cat /etc/rancher/rke2/config. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1521">https://github.com/harvester/harvester/issues/1521</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Agent Node should keep connection when any master Node is down</li> </ul> <h2 id="case-agent-nodes-connecting-status">Case: Agent Node&rsquo;s connecting status</h2> <ol> <li>Install Harvester with 4 nodes which joining node MUST join by VIP (point <code>server-url</code> to use VIP)</li> <li>Make sure all nodes are ready <ol> <li>Login to dashboard, check host <strong>state</strong> become <code>Active</code></li> <li>SSH to the 1st node, run command <code>kubectl get node</code> to check all <strong>STATUS</strong> should be <code>Ready</code></li> </ol> </li> <li>SSH to agent nodes which <strong>ROLES</strong> IS <code>&lt;none&gt;</code> in <strong>Step 2i</strong>&rsquo;s output <ul> <li><input checked="" disabled="" type="checkbox"> Output should contains VIP in the server URL, by run command <code>cat /etc/rancher/rke2/config.yaml.d/90-harvester-vip.yaml</code></li> <li><input checked="" disabled="" type="checkbox"> Output should contain the line <code>server: https://127.0.0.1:6443</code>, by run command <code>cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig</code></li> <li><input checked="" disabled="" type="checkbox"> Output should contain the line <code>server: https://127.0.0.1:6443</code>, by run command <code>cat /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig</code></li> </ul> </li> <li>SSH to server nodes which <strong>ROLES</strong> contains <code>control-plane</code> in <strong>Step 2i</strong>&rsquo;s output <ul> <li><input checked="" disabled="" type="checkbox"> Check file should not exist in the path <code>/etc/rancher/rke2/config.yaml.d/90-harvester-vip.yaml</code></li> </ul> </li> <li>Shut down a server node, check following things <ul> <li><input checked="" disabled="" type="checkbox"> Host <strong>State</strong> should not be <code>Active</code> in dashboard</li> <li><input checked="" disabled="" type="checkbox"> Node <strong>STATUS</strong> should be <code>NotReady</code> in the command output of <code>kubectl get node</code></li> <li><input checked="" disabled="" type="checkbox"> <strong>STATUS</strong> of agent nodes should be <code>Ready</code> in the command output of <code>kubectl get node</code></li> </ul> </li> <li>Power on the server node, wait until it back to cluster</li> <li>repeat <strong>Step 5-6</strong> for other server nodes</li> </ol> Alertmanager supports main stream receivers 
https://harvester.github.io/tests/manual/_incoming/2521-alertmanager-supports-main-stream-receivers/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2521-alertmanager-supports-main-stream-receivers/ - Related issues: #2521 [FEATURE] Alertmanager supports main stream receivers Category: Alter manager Verification Steps Prepare another VM or machine have the same subnet with the Harvester Prepare a webhook server on the VM, reference to https://github.com/w13915984028/harvester-develop-summary/blob/main/test-log-event-audit-with-webhook-server.md You may need to install python3 web package, refer to https://webpy.org/install Run export PORT=8094 on the webhook server VM Launch the webhook server python3 simple-webhook-server.py davidtclin@ubuntu-clean:~$ python3 simple-webhook-server.py usage: export PORT=1234 to set http server port number as 1234 start a simple webhook server, PORT 8094 @ 2022-09-21 16:39:58. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2521">#2521</a> [FEATURE] Alertmanager supports main stream receivers</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Alter manager</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare another VM or machine have the same subnet with the Harvester</li> <li>Prepare a webhook server on the VM, reference to <a href="https://github.com/w13915984028/harvester-develop-summary/blob/main/test-log-event-audit-with-webhook-server.md">https://github.com/w13915984028/harvester-develop-summary/blob/main/test-log-event-audit-with-webhook-server.md</a></li> <li>You may need to install python3 web package, refer to <a href="https://webpy.org/install">https://webpy.org/install</a></li> <li>Run <code>export PORT=8094</code> on the webhook server VM</li> <li>Launch the webhook server <code>python3 simple-webhook-server.py</code> <pre tabindex="0"><code>davidtclin@ubuntu-clean:~$ python3 simple-webhook-server.py usage: export PORT=1234 to set http server port number as 1234 start a simple webhook server, PORT 8094 @ 2022-09-21 16:39:58.706792 http://0.0.0.0:8094/ </code></pr All Namespace filtering in VM list https://harvester.github.io/tests/manual/_incoming/2578-all-namespace-filtering/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2578-all-namespace-filtering/ - Related issues: #2578 [BUG] When first entering the harvester cluster from Virtualization Managements, some vm&rsquo;s in namespace are not shown in the list Category: UI Verification Steps Create a harvester cluster Create a VM in the default namespace Creating a Namespace (eg: test-vm) Import the Harvester cluster in Rancher access to the harvester cluster from Virtualization Management click Virtual Machines tab Expected Results test-vm-1 should also be shown in the list + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2578">#2578</a> [BUG] When first entering the harvester cluster from Virtualization Managements, some vm&rsquo;s in namespace are not shown in the list</li> </ul> <h2 id="category">Category:</h2> <ul> <li>UI</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a harvester cluster</li> <li>Create a VM in the default namespace</li> <li>Creating a Namespace (eg: test-vm)</li> <li>Import the Harvester cluster in Rancher</li> <li>access to the harvester cluster from Virtualization Management</li> <li>click Virtual Machines tab</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> 
<li>test-vm-1 should also be shown in the list <img src="https://user-images.githubusercontent.com/24985926/181211867-4f3889cd-a14e-463c-9a7f-0aee2d5f358e.png" alt="image"></li> </ol> allow users to create cloud-config template on the VM creating page https://harvester.github.io/tests/manual/templates/allow-users-to-create-cloud-config-template-on-vm-creating-page/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/templates/allow-users-to-create-cloud-config-template-on-vm-creating-page/ - Related issues: #1433 allow users to create cloud-config template on the VM creating page Category: Virtual Machine Verification Steps Create a new virtual machine Click advanced options Drop down user data template -&gt; create new Drop down network data template -&gt; create new Expected Results User can create user and network data template when create virtual machine Created cloud-init template template can be saved and auto selected to the latest one + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1433">#1433</a> allow users to create cloud-config template on the VM creating page</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Virtual Machine</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a new virtual machine</li> <li>Click advanced options</li> <li>Drop down user data template -&gt; create new</li> <li>Drop down network data template -&gt; create new</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>User can create user and network data template when create virtual machine <img src="https://user-images.githubusercontent.com/29251855/139009117-9c191986-2253-4bff-b73f-962eabe2b2d9.png" alt="image"> Created cloud-init template</li> <li>template can be saved and auto selected to the latest one <img src="https://user-images.githubusercontent.com/29251855/139008946-97f0d528-c5b9-4add-82d9-4105bd51f0c5.png" alt="image"></li> </ol> Attach unpartitioned NVMe disks to host https://harvester.github.io/tests/manual/hosts/attach-unpartitioned-nvme-disks-to-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/attach-unpartitioned-nvme-disks-to-host/ - Related issues: #1414 Adding unpartitioned NVMe disks fails Category: Storage Verification Steps Use qemu-img create -f qcow2 command to create three disk image locally Shutdown target node VM machine Directly edit VM xml content in virt manager page Add to the first line Add the following line before the end of quote &lt;qemu:commandline&gt; &lt;qemu:arg value=&#34;-drive&#34;/&gt; &lt;qemu:arg value=&#34;file=/home/davidtclin/Documents/Software/qemu_kvm/node_3/nvme301.img,if=none,id=D22&#34;/&gt; &lt;qemu:arg value=&#34;-device&#34;/&gt; &lt;qemu:arg value=&#34;nvme,drive=D22,serial=1234&#34;/&gt; &lt;qemu:arg value=&#34;-drive&#34;/&gt; &lt;qemu:arg value=&#34;file=/home/davidtclin/Documents/Software/qemu_kvm/node_3/nvme302.img,if=none,id=D23&#34;/&gt; &lt;qemu:arg value=&#34;-device&#34;/&gt; &lt;qemu:arg value=&#34;nvme,drive=D23,serial=1235&#34;/&gt; &lt;qemu:arg value=&#34;-drive&#34;/&gt; &lt;qemu:arg value=&#34;file=/home/davidtclin/Documents/Software/qemu_kvm/node_3/nvme303. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1414">#1414</a> Adding unpartitioned NVMe disks fails</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Use <code>qemu-img create -f qcow2</code> command to create three disk image locally</li> <li>Shutdown target node VM machine</li> <li>Directly edit VM xml content in virt manager page</li> <li>Add <!-- raw HTML omitted --> to the first line</li> <li>Add the following line before the end of <!-- raw HTML omitted --> quote</li> </ol> <pre tabindex="0"><code>&lt;qemu:commandline&gt; &lt;qemu:arg value=&#34;-drive&#34;/&gt; &lt;qemu:arg value=&#34;file=/home/davidtclin/Documents/Software/qemu_kvm/node_3/nvme301.img,if=none,id=D22&#34;/&gt; &lt;qemu:arg value=&#34;-device&#34;/&gt; &lt;qemu:arg value=&#34;nvme,drive=D22,serial=1234&#34;/&gt; &lt;qemu:arg value=&#34;-drive&#34;/&gt; &lt;qemu:arg value=&#34;file=/home/davidtclin/Documents/Software/qemu_kvm/node_3/nvme302.img,if=none,id=D23&#34;/&gt; &lt;qemu:arg value=&#34;-device&#34;/&gt; &lt;qemu:arg value=&#34;nvme,drive=D23,serial=1235&#34;/&gt; &lt;qemu:arg value=&#34;-drive&#34;/&gt; &lt;qemu:arg value=&#34;file=/home/davidtclin/Documents/Software/qemu_kvm/node_3/nvme303.img,if=none,id=D24&#34;/&gt; &lt;qemu:arg value=&#34;-device&#34;/&gt; &lt;qemu:arg value=&#34;nvme,drive=D24,serial=1236&#34;/&gt; &lt;/qemu:commandline&gt; </code></pr Authentication Validation https://harvester.github.io/tests/manual/authentication/general-authentication/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/general-authentication/ - Enable Access Control . Choose “Allow any valid User” as “Site Access”. Make sure any user is able to access the site. Enable Access Control . Choose “Restrict to Specific User” and add few users. Make sure only the specified users have access to the server. Others should get authentication error. Enable Access Control . Choose “Restrict to Specific User” and add a group. Make sure only all users belonging to the group have access to the server Others should get authentication error. + <ol> <li>Enable Access Control . Choose “Allow any valid User” as “Site Access”. Make sure any user is able to access the site.</li> <li>Enable Access Control . Choose “Restrict to Specific User” and add few users. Make sure only the specified users have access to the server. Others should get authentication error.</li> <li>Enable Access Control . Choose “Restrict to Specific User” and add a group. Make sure only all users belonging to the group have access to the server Others should get authentication error.</li> <li>Log in as a normal user (who has access to server but not a member of any environment) , a new default environment should be created for this user with this user account being the owner of the environment.Account entry should get created for this 1. user.</li> <li>Log in as a normal user (who has access to server) , create a new environment. This user should become the owner of the environment.</li> <li>As owner of environment , Add a user as “member” of an environment.Make sure this user gets access to this environment.</li> <li>As owner of environment , Add a user as “owner” of an environment.Make sure this user gets access to this environment. 
User should also have ability to manage this environment which is to add/delete member of the environment.</li> <li>As owner of environment , Add a group as “member” of an environment.Make sure that all users that belong to this group get access to the environment.</li> <li>As owner of environment , Add a group as “owner” of an environment.Make sure all users of the group gets access to this environment. User should also have ability to manage this environment which is to add/delete member of the environment.</li> <li>As owner of environment , change the role of a member of the environment from “owner” to “member”. Make sure his access control reflects this change.</li> <li>As owner of environment , change the role of a member of the environment from “member” to “owner”. Make sure his access control reflects this change.</li> <li>As owner of environment, remove an existing “owner” member of the environment.Make sure this user does not have access to environment anymore.</li> <li>As owner of environment, remove an existing “member” member of the environment.Make sure this user does not have access to environment anymore.</li> <li>As owner of environment, deactivate the environment. Members of the environment should have no access to environment. Owners should only be able to see in their manage environments but not list of active environments.</li> <li>As owner of environment, Activate a deactivated environment. Members of the environment should now have access to environment.</li> <li>As owner of environment, delete a deactivated environment.Members of the environment should not have access to environment. All hosts relating to the environment should be purged (only hosts created through docker-machine). Custom hosts will not be purged.</li> <li>As admin user, deactivate an existing account. Account should have no access to rancher server.</li> <li>As admin user, activate a deactivated account.Account should get back access to rancher server.</li> <li>As admin user, delete an existing account. Once account is purged, make sure that account is not a member of environments.</li> <li>Log in as a deleted account when account is still not purged.User should have no access to rancher server (like in deactivated state).</li> <li>Log in as a deleted account when account is purged.When user tries to log in , a new account entry will get created and it will not have any access to any existing environment this account had access to before the account was deleted.</li> <li>Delete a user that is a member of the project. List the member of the project , it should return the deleted as member of the project but should reflect the user as “unknown user”.</li> <li>As member user of environment, trying to add a member to environment should fail.</li> <li>As member user of environment, trying to delete an existing member to the environment should fail</li> <li>As member user of environment, trying to deactivate an environment should fail</li> <li>As member user of environment, trying to delete an environment should fail.</li> <li>As member user of environment, trying to change the role of an existing member to the environment should fail.</li> <li>As admin user, change account type of existing &ldquo;user&rdquo; to &ldquo;admin&rdquo;. 
Check that they have access to &ldquo;Admin&rdquo; tab.</li> </ol> <h2 id="special-characters-relating-test-cases">Special Characters relating test cases:</h2> <ol> <li>User name having special characters ( In this case user DN will have special characters)</li> <li>Group name having special characters( In this case group DN will have special characters)</li> <li>Password having special characters</li> </ol> <p>Test the above 3 test cases , by having &ldquo;required&rdquo; site access set for user/group as applicable.</p> Auto provision lots of extra disks https://harvester.github.io/tests/manual/_incoming/large-amount-of-extra-disks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/large-amount-of-extra-disks/ - :warning: This is a heuristic test plan since real world race condition is hard to reproduce. If you find any better alternative, feel free to update. This test is better to perform under QEMU/libvirt environment. Related issues: #1718 [BUG] Automatic disk provisioning result in unusable ghost disks on NVMe drives Category: Storage Verification Steps Create a harvester cluster and attach 10 or more extra disks (needs WWN so that they can be identified uniquely). + <blockquote> <p>:warning: This is a heuristic test plan since real world race condition is hard to reproduce. If you find any better alternative, feel free to update.</p> <p>This test is better to perform under QEMU/libvirt environment.</p> </blockquote> <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1718">#1718</a> [BUG] Automatic disk provisioning result in unusable ghost disks on NVMe drives</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a harvester cluster and attach 10 or more extra disks (needs WWN so that they can be identified uniquely).</li> <li>Add <a href="https://docs.harvesterhci.io/v1.0/settings/settings/#auto-disk-provision-paths-experimental"><code>auto-disk-provision-paths</code></a> setting and provide a value that matches all the disks added from previous step.</li> <li>Wait for minutes for the auto-provisioning process.</li> <li>Eventually, all disks matching the pattern should be partitioned, formatted and mounted successfully.</li> <li>Navigate to longhorn dashboard to see if each disk is successfully added and scheduled.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>A large amout of disks can be auto-provisioned simultaneously.</li> </ol> Automatically get VIP during PXE installation https://harvester.github.io/tests/manual/deployment/1410-pxe-installation-automatically-get-vip/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/1410-pxe-installation-automatically-get-vip/ - Related issues: #1410 Support getting VIP automatically during PXE boot installation Verification Steps Comment vip and vip_hw_addr in ipxe-examples/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2 Start vagrant-pxe-harvester Run kubectl get cm -n harvester-system vip Check whether we can get ip and hwAddress in it Run ip a show harvester-mgmt Check whether there are two IPs in it and one is the vip. 
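The two VIP checks above can be combined into one pass on a cluster node; a minimal sketch, assuming kubectl is usable on the node (e.g. via /etc/rancher/rke2/rke2.yaml):

    # The vip ConfigMap should carry the auto-assigned ip and hwAddress
    kubectl get cm -n harvester-system vip -o yaml | grep -E 'ip:|hwAddress:'
    # harvester-mgmt should carry two IPv4 addresses, one of them the VIP
    ip a show harvester-mgmt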
Expected Results VIP should automatically be assigned + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1410">#1410</a> Support getting VIP automatically during PXE boot installation</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Comment <code>vip</code> and <code>vip_hw_addr</code> in <code>ipxe-examples/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2</code></li> <li>Start vagrant-pxe-harvester</li> <li>Run <code>kubectl get cm -n harvester-system vip</code> <ul> <li>Check whether we can get <code>ip</code> and <code>hwAddress</code> in it</li> </ul> </li> <li>Run <code>ip a show harvester-mgmt</code> <ul> <li>Check whether there are two IPs in it and one is the vip.</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VIP should automatically be assigned</li> </ol> Automatically get VIP during PXE installation https://harvester.github.io/tests/manual/hosts/1410-pxe-installation-automatically-get-vip/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1410-pxe-installation-automatically-get-vip/ - Related issues: #1410 Support getting VIP automatically during PXE boot installation Verification Steps Comment vip and vip_hw_addr in ipxe-examples/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2 Start vagrant-pxe-harvester Run kubectl get cm -n harvester-system vip Check whether we can get ip and hwAddress in it Run ip a show harvester-mgmt Check whether there are two IPs in it and one is the vip. Expected Results VIP should automatically be assigned + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1410">#1410</a> Support getting VIP automatically during PXE boot installation</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Comment <code>vip</code> and <code>vip_hw_addr</code> in <code>ipxe-examples/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2</code></li> <li>Start vagrant-pxe-harvester</li> <li>Run <code>kubectl get cm -n harvester-system vip</code> <ul> <li>Check whether we can get <code>ip</code> and <code>hwAddress</code> in it</li> </ul> </li> <li>Run <code>ip a show harvester-mgmt</code> <ul> <li>Check whether there are two IPs in it and one is the vip.</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VIP should automatically be assigned</li> </ol> Backup and restore of harvester cluster https://harvester.github.io/tests/manual/node-driver/q-cluster-backup-restore/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/q-cluster-backup-restore/ - create a deployment in harvester cluster Go to the rancher&rsquo;s cluster list and make a backup of the harvester cluster After the backup is complete, delete the deployment created in the harvester cluster go to the list of clusters in the rancher and restore the harvester cluster Expected Results + <ol> <li>create a deployment in harvester cluster</li> <li>Go to the rancher&rsquo;s cluster list and make a backup of the harvester cluster</li> <li>After the backup is complete, delete the deployment created in the harvester cluster</li> <li>go to the list of clusters in the rancher and restore the harvester cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> Backup S3 reduce permissions https://harvester.github.io/tests/manual/backup-and-restore/backup_s3_permission/ Mon, 01 Jan 0001 00:00:00 +0000 
https://harvester.github.io/tests/manual/backup-and-restore/backup_s3_permission/ - Ref: https://github.com/harvester/harvester/issues/1339 Verify Items Backup target connect to S3 should only require the permission to access the specific bucket Case: S3 Backup with single-bucket-user Install Harvester with any nodes Setup Minio then follow the instruction to create a single-bucket-user. Create specific bucket for the user Create other buckets setup backup-target with the single-bucket-user permission When assign the dedicated bucket (for the user), connection should success. When assign other buckets, connection should failed with AccessDenied error message + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1339">https://github.com/harvester/harvester/issues/1339</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Backup target connect to S3 should only require the permission to access the specific bucket</li> </ul> <h2 id="case-s3-backup-with-single-bucket-user">Case: S3 Backup with <code>single-bucket-user</code></h2> <ol> <li>Install Harvester with any nodes</li> <li>Setup Minio <ol> <li>then follow the <a href="https://objectivefs.com/howto/how-to-restrict-s3-bucket-policy-to-only-one-aws-s3-bucket">instruction</a> to create a <code>single-bucket-user</code>.</li> <li>Create specific bucket for the user</li> <li>Create other buckets</li> </ol> </li> <li>setup <code>backup-target</code> with the <strong>single-bucket-user</strong> permission <ol> <li>When assign the dedicated bucket (for the user), connection should success.</li> <li>When assign other buckets, connection should failed with <strong>AccessDenied</strong> error message</li> </ol> </li> </ol> Backup Single VM (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/backup-single-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/backup-single-vm/ - Click take backup in virtual machine list Expected Results Backup should be created Backup should be listed in backups list Backup should be available on remote storage (S3/NFS) + <ol> <li>Click take backup in virtual machine list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backup should be created</li> <li>Backup should be listed in backups list</li> <li>Backup should be available on remote storage (S3/NFS)</li> </ol> Backup Single VM that has been live migrated before (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/backup-single-vm-that-has-been-live-migrated/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/backup-single-vm-that-has-been-live-migrated/ - Click take backup in virtual machine list Expected Results Backup should be created Backup should be listed in backups list Backup should be available on remote storage (S3/NFS) + <ol> <li>Click take backup in virtual machine list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backup should be created</li> <li>Backup should be listed in backups list</li> <li>Backup should be available on remote storage (S3/NFS)</li> </ol> Backup single VM with node off https://harvester.github.io/tests/manual/backup-and-restore/backup-single-vm-node-off/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/backup-single-vm-node-off/ - On multi-node setup bring down node that is hosting VM Click take backup in virtual machine list Expected Results The backup should complete successfully Comments We do allow taking backup 
even if the VM is down, as you can take backup when the VM is off, this is because the volume still exists with longhorn&rsquo;s multi replicas, but we need to check the data integrity. Known Bugs https://github.com/harvester/harvester/issues/1483 + <ol> <li>On multi-node setup bring down node that is hosting VM</li> <li>Click take backup in virtual machine list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The backup should complete successfully</li> </ol> <h2 id="comments">Comments</h2> <p>We do allow taking backup even if the VM is down, as you can take backup when the VM is off, this is because the volume still exists with longhorn&rsquo;s multi replicas, but we need to check the data integrity.</p> Backup Target error message https://harvester.github.io/tests/manual/backup-and-restore/backup_target_errmsg/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/backup_target_errmsg/ - Ref: https://github.com/harvester/harvester/issues/1051 Verify Items Backup target should check input before clicking Save Error message should be displayed on the edit page when input is wrong Case: Connect to invalid Backup Target Install Harvester with any node Login to dashboard, then navigate to Advanced Settings Edit backup-target, then input invalid data for NFS/S3 and click Save The page should not be redirected to Advanced Settings Error Message should be displayed under the Save button + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1051">https://github.com/harvester/harvester/issues/1051</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Backup target should check input before clicking <strong>Save</strong></li> <li>Error message should be displayed on the edit page when input is wrong</li> </ul> <h2 id="case-connect-to-invalid-backup-target">Case: Connect to invalid Backup Target</h2> <ol> <li>Install Harvester with any node</li> <li>Login to dashboard, then navigate to <strong>Advanced Settings</strong></li> <li>Edit <strong>backup-target</strong>, then input invalid data for NFS/S3 and click <strong>Save</strong></li> <li>The page should not be redirected to <strong>Advanced Settings</strong></li> <li>Error Message should be displayed under the <strong>Save</strong> button</li> </ol> Basic functional verification of Harvester cluster after creation https://harvester.github.io/tests/manual/node-driver/verify-cluster-functionality/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/verify-cluster-functionality/ - create the project. 
deploy deployment Expected Results The project is created successfully Deployment successfully deployed + <ol> <li>create the project.</li> <li>deploy deployment</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The project is created successfully</li> <li>Deployment successfully deployed</li> </ol> Boot installer under Legacy BIOS and UEFI https://harvester.github.io/tests/manual/_incoming/2023-boot-installer-legacy-and-uefi/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2023-boot-installer-legacy-and-uefi/ - Related issues #2023 Legacy Iso for older servers Verification Steps BIOS Test Build harvester-installer Boot build artifact using BIOS Legacy mode: qemu-system-x86_64 -m 2048 -cdrom ../dist/artifacts/harvester-master-amd64 Verify that the installer boot process reaches the screen that says &ldquo;Create New Cluster&rdquo; or &ldquo;Join existing cluster&rdquo; UEFI Test Build harvester-installer (or use the same one from the BIOS Test, it&rsquo;s a hybrid ISO) Boot build artifact using UEFI mode: qemu-system-x86_64 -m 2048 -cdrom . + <ul> <li>Related issues <a href="https://github.com/harvester/harvester/issues/2023">#2023</a> Legacy Iso for older servers</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <h3 id="bios-test">BIOS Test</h3> <ol> <li>Build <a href="https://github.com/harvester/harvester-installer">harvester-installer</a></li> <li>Boot build artifact using BIOS Legacy mode: <code>qemu-system-x86_64 -m 2048 -cdrom ../dist/artifacts/harvester-master-amd64</code></li> <li>Verify that the installer boot process reaches the screen that says &ldquo;Create New Cluster&rdquo; or &ldquo;Join existing cluster&rdquo;</li> </ol> <h3 id="uefi-test">UEFI Test</h3> <ol> <li>Build <a href="https://github.com/harvester/harvester-installer">harvester-installer</a> (or use the same one from the BIOS Test, it&rsquo;s a hybrid ISO)</li> <li>Boot build artifact using UEFI mode: <code>qemu-system-x86_64 -m 2048 -cdrom ../dist/artifacts/harvester-master-amd64 -bios /usr/share/qemu/ovmf-x86_64.bin</code> (OVMF is a port of the UEFI firmware to qemu)</li> <li>Verify that the installer boot process reaches the screen that says &ldquo;Create New Cluster&rdquo; or &ldquo;Join existing cluster&rdquo;</li> </ol> Button of `Download KubeConfig` (e2e_fe) https://harvester.github.io/tests/manual/misc/download_kubeconfig/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/misc/download_kubeconfig/ - Ref: https://github.com/harvester/harvester/issues/1349 Verify Items Download KubeConfig should not exist in general views Download Kubeconfig should exist in Support page Downloaded file should be named with suffix .yaml Case: Download KubeConfig navigate to every pages to make sure download kubeconfig icon will not appear in header section navigate to support page to check Download KubeConfig is work normally + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1349">https://github.com/harvester/harvester/issues/1349</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Download KubeConfig should not exist in general views</li> <li>Download Kubeconfig should exist in Support page</li> <li>Downloaded file should be named with suffix <code>.yaml</code></li> </ul> <h2 id="case-download-kubeconfig">Case: Download KubeConfig</h2> <ul> <li>navigate to every pages to make sure download kubeconfig icon will not appear in header section</li> <li>navigate to support page to check <code>Download KubeConfig</code> is 
work normally</li> </ul> Chain VM templates and images https://harvester.github.io/tests/manual/templates/760-chained-vm-templates/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/templates/760-chained-vm-templates/ - Related issues: #760 cloud config byte limit Verification Steps Create a vm and add userData or networkData, test if it works Run VM health checks create a vm template and add userData create a new vm and use the template Run VM health checks use the existing vm to generate a template, then use the template to create a new vm Run VM health Checks Expected Results All VM&rsquo;s should create All VM Health Checks should pass + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/760">#760</a> cloud config byte limit</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a vm and add userData or networkData, test if it works</li> <li>Run VM health checks</li> <li>create a vm template and add userData create a new vm and use the template</li> <li>Run VM health checks</li> <li>use the existing vm to generate a template, then use the template to create a new vm</li> <li>Run VM health Checks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>All VM&rsquo;s should create</li> <li>All VM Health Checks should pass</li> </ol> Chain VM templates and images https://harvester.github.io/tests/manual/virtual-machines/760-chained-vm-templates/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/760-chained-vm-templates/ - Related issues: #760 cloud config byte limit Verification Steps Create a vm and add userData or networkData, test if it works Run VM health checks create a vm template and add userData create a new vm and use the template Run VM health checks use the existing vm to generate a template, then use the template to create a new vm Run VM health Checks Expected Results All VM&rsquo;s should create All VM Health Checks should pass + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/760">#760</a> cloud config byte limit</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a vm and add userData or networkData, test if it works</li> <li>Run VM health checks</li> <li>create a vm template and add userData create a new vm and use the template</li> <li>Run VM health checks</li> <li>use the existing vm to generate a template, then use the template to create a new vm</li> <li>Run VM health Checks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>All VM&rsquo;s should create</li> <li>All VM Health Checks should pass</li> </ol> Change api-ui-source bundled (e2e_fe) https://harvester.github.io/tests/manual/advanced/chage-api-ui-source-bundled/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/chage-api-ui-source-bundled/ - Log in as admin Navigate to advanced settings Change api-ui-source to bundled Save Refresh page Check page source for dashboard loading location Expected Results Log in should complete Settings should save dashboard location should be loading from /dashboard/_nuxt/ (verify it in browser&rsquo;s developers tools) + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Change api-ui-source to bundled</li> <li>Save</li> <li>Refresh page</li> <li>Check page source for dashboard loading location</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Log in should complete</li> <li>Settings should save</li> 
<li>dashboard location should be loading from <code>/dashboard/_nuxt/</code> <ul> <li>(verify it in browser&rsquo;s developers tools)</li> </ul> </li> </ol> Change api-ui-source external (e2e_fe) https://harvester.github.io/tests/manual/advanced/chage-api-ui-source-external/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/chage-api-ui-source-external/ - Log in as admin Navigate to advanced settings Change api-ui-source to external Save Refresh page Check page source for dashboard loading location Expected Results Log in should complete Settings should save dashboard location should be loading from https://releases.rancher.com/harvester-ui/latest (verify it in browser&rsquo;s developers tools) + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Change api-ui-source to external</li> <li>Save</li> <li>Refresh page</li> <li>Check page source for dashboard loading location</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Log in should complete</li> <li>Settings should save</li> <li>dashboard location should be loading from <a href="https://releases.rancher.com/harvester-ui/latest">https://releases.rancher.com/harvester-ui/latest</a></li> <li>(verify it in browser&rsquo;s developers tools)</li> </ol> Change DNS servers while installing https://harvester.github.io/tests/manual/deployment/1590-change-dns-server-for-install/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/1590-change-dns-server-for-install/ - Related issues: #1590 Harvester installer can&rsquo;t resolve hostnames Known Issues When supplying multiple ip=&hellip; kernel cmdline arguments, only one of them will be configured by dracut, therefore only the configured interface would have ifcfg generated. So for now, we can&rsquo;t support multiple ip=&hellip; kernel cmdline arguments Verification Steps Because configuring the network of the installation environment only works with PXE installation, you could use ipxe-examples/vagrant-pxe-harvester/ to set it up. Be sure you can run setup_harvester. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1590">#1590</a> Harvester installer can&rsquo;t resolve hostnames</li> </ul> <h2 id="known-issues">Known Issues</h2> <p>When supplying multiple ip=&hellip; kernel cmdline arguments, only one of them will be configured by dracut, therefore only the configured interface would have ifcfg generated. So for now, we can&rsquo;t support multiple ip=&hellip; kernel cmdline arguments</p> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Because configuring the network of the installation environment only works with PXE installation, you could use ipxe-examples/vagrant-pxe-harvester/ to set it up. Be sure you can run setup_harvester.sh without any problem.</p> Change DNS settings on vagrant-pxe-harvester install https://harvester.github.io/tests/manual/deployment/ipxe-dns-change/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/ipxe-dns-change/ - Install using ipxe-examples Also change harvester_network_config.dns_servers in the settings.yml for the vagrant environment before deploy. This will change the DNS in the harvester OS config. If you also want to change the DNS for everything in the DHCP scope change harvester_network_config.dhcp_server.dns_server. Expected Results On completion of the installation, Harvester should provide the management url and show status. SSH into one of the nodes. 
If you use the default configuration you can use ssh rancher@192. + <ul> <li>Install using <a href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester">ipxe-examples</a></li> <li>Also change <code>harvester_network_config.dns_servers</code> in the <code>settings.yml</code> for the vagrant environment before deploy. This will change the DNS in the harvester OS config.</li> <li>If you also want to change the DNS for everything in the DHCP scope change <code>harvester_network_config.dhcp_server.dns_server</code>.</li> </ul> <h2 id="expected-results">Expected Results</h2> <ol> <li>On completion of the installation, Harvester should provide the management url and show status.</li> <li>SSH into one of the nodes. If you use the default configuration you can use <code>ssh rancher@192.168.0.30</code>.</li> <li>When you run <code>cat /etc/resolv.conf</code> the changed DNS records should show up</li> </ol> Change log level debug https://harvester.github.io/tests/manual/advanced/change-log-level-debug/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/change-log-level-debug/ - Log in as admin Navigate to advanced settings Edit config on log-level Choose Debug Save Create two VMs Reboot both VMs Download Logs Expected Results Login should complete Settings should save VMs should create VMs should reboot sucessfully Logs should show Debug level output + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Edit config on log-level</li> <li>Choose Debug</li> <li>Save</li> <li>Create two VMs</li> <li>Reboot both VMs</li> <li>Download Logs</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Login should complete</li> <li>Settings should save</li> <li>VMs should create</li> <li>VMs should reboot sucessfully</li> <li>Logs should show Debug level output</li> </ol> Change log level Info (e2e_fe) https://harvester.github.io/tests/manual/advanced/change-log-level-info/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/change-log-level-info/ - Log in as admin Navigate to advanced settings Edit config on log-level Choose Info Save Create two VMs Reboot both VMs Download Logs Expected Results Login should complete Settings should save VMs should create VMs should reboot sucessfully Logs should show Info level output + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Edit config on log-level</li> <li>Choose Info</li> <li>Save</li> <li>Create two VMs</li> <li>Reboot both VMs</li> <li>Download Logs</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Login should complete</li> <li>Settings should save</li> <li>VMs should create</li> <li>VMs should reboot sucessfully</li> <li>Logs should show Info level output</li> </ol> Change log level Trace (e2e_fe) https://harvester.github.io/tests/manual/advanced/change-log-level-trace/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/change-log-level-trace/ - Log in as admin Navigate to advanced settings Edit config on log-level Choose Trace Save Create two VMs Reboot both VMs Download Logs Expected Results Login should complete Settings should save VMs should create VMs should reboot sucessfully Logs should show Trace level output + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Edit config on log-level</li> <li>Choose Trace</li> <li>Save</li> <li>Create two VMs</li> <li>Reboot both VMs</li> <li>Download Logs</li> </ol> <h2 id="expected-results">Expected 
Results</h2> <ol> <li>Login should complete</li> <li>Settings should save</li> <li>VMs should create</li> <li>VMs should reboot sucessfully</li> <li>Logs should show Trace level output</li> </ol> Change user password (e2e_fe) https://harvester.github.io/tests/manual/authentication/1409-change-password/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/1409-change-password/ - Related issues: #1409 There&rsquo;s no way to change user password in single cluster UI Verification Steps Logged in with user Changed password Logged out Logged back in with new password Verified old password didn&rsquo;t work Expected Results Password should change and be accepted on new login Old password shouldn&rsquo;t work + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1409">#1409</a> There&rsquo;s no way to change user password in single cluster UI</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Logged in with user</li> <li>Changed password</li> <li>Logged out</li> <li>Logged back in with new password</li> <li>Verified old password didn&rsquo;t work</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Password should change and be accepted on new login</li> <li>Old password shouldn&rsquo;t work</li> </ol> Check can apply the resource quota limit to project and namespace https://harvester.github.io/tests/manual/harvester-rancher/check-can-apply-the-resource-quota-limit-to-project-and-namespace-/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/check-can-apply-the-resource-quota-limit-to-project-and-namespace-/ - Related issues: #1454 Incorrect memory unit conversion in namespace resource quota Category: Rancher Integration Environment setup Install the latest rancher from docker command $ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.3 Verification Steps Access Rancher dashboard Open Cluster management -&gt; Explore the active cluster Create a new project test-1454-proj in Projects/Namespaces Set resource quota for the project Memory Limit: Project Limit: 512 Namespace default limit: 256 Memory Reservation: Project Limit: 256 Namespace default limit: 128 Click create namespace test-1454-ns under project test-1454-proj Click Kubectl Shell and run the following command kubectl get ns kubectl get quota -n test-1454-ns Check the output Click Workload -&gt; Deployments -&gt; Create Given the Name, Namespace and Container image Click Create Expected Results Based on configured project resource limit and namespace default limit, + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1454">#1454</a> Incorrect memory unit conversion in namespace resource quota</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher Integration</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ol> <li>Install the latest rancher from docker command</li> </ol> <pre tabindex="0"><code>$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.3 </code></pre><h2 id="verification-steps">Verification Steps</h2> <ol> <li>Access Rancher dashboard</li> <li>Open Cluster management -&gt; Explore the active cluster</li> <li>Create a new project <code>test-1454-proj</code> in Projects/Namespaces</li> <li>Set resource quota for the project</li> </ol> <ul> <li>Memory Limit: <ul> <li>Project Limit: 512</li> <li>Namespace default limit: 256</li> </ul> </li> <li>Memory 
Reservation: <ul> <li>Project Limit: 256</li> <li>Namespace default limit: 128</li> </ul> </li> </ul> <ol> <li>Click create namespace <code>test-1454-ns</code> under project <code>test-1454-proj</code></li> <li>Click <code>Kubectl Shell</code> and run the following command</li> </ol> <ul> <li>kubectl get ns</li> <li>kubectl get quota -n test-1454-ns</li> </ul> <ol> <li>Check the output</li> <li>Click <code>Workload</code> -&gt; <code>Deployments</code> -&gt; <code>Create</code></li> <li>Given the <code>Name</code>, <code>Namespace</code> and <code>Container image</code> <img src="https://user-images.githubusercontent.com/29251855/143847775-eb84fa49-54d5-4001-a210-cbd8ed1235d1.png" alt="image"></li> <li>Click Create</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Based on configured project resource limit and namespace default limit,</p> Check can start VM after Harvester upgrade https://harvester.github.io/tests/manual/_incoming/start-vm-after-harvester-upgrade-complete/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/start-vm-after-harvester-upgrade-complete/ - Related issues: #2270 [BUG] Unable start VM after upgraded v1.0.1 to v1.0.2-rc2 Category: Harvester Upgrade Verification Steps Prepare the previous stable Harvester release cluster Create image Enable Network and create VM Create several virtual machine Follow the official document steps to prepare the online or offline upgrade Shutdown all virtual machines Start the upgrade Confirm all the upgrade process complete Start all the virtual machines Expected Results All virtual machine could be correctly started and work as expected + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2270">#2270</a> [BUG] Unable start VM after upgraded v1.0.1 to v1.0.2-rc2</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Harvester Upgrade</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare the previous stable Harvester release cluster</li> <li>Create image</li> <li>Enable Network and create VM</li> <li>Create several virtual machine</li> <li>Follow the <a href="https://docs.harvesterhci.io/v1.0/upgrade/automatic/">official document steps</a> to prepare the online or offline upgrade</li> <li>Shutdown all virtual machines</li> <li>Start the upgrade</li> <li>Confirm all the upgrade process complete</li> <li>Start all the virtual machines</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>All virtual machine could be correctly started and work as expected</li> </ul> Check conditions when stop/pause VM https://harvester.github.io/tests/manual/_incoming/1987-failure-message-in-stopping-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1987-failure-message-in-stopping-vm/ - Related issues: #1987 Verification Steps Stop Request should not have failure message Create a VM with runStrategy: RunStrategyAlways. Stop the VM. Check there is no Failure attempting to delete VMI: &lt;nil&gt; in VM status. UI should not show pause message Create a VM. Pause the VM. Although the message The status of pod readliness gate &quot;kubevirt.io/virtual-machine-unpaused&quot; is not &quot;True&quot;, but False is in the VM condition, UI should not show it. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1987">#1987</a></li> </ul> <h2 id="verification-steps">Verification Steps</h2> <p>Stop Request should not have failure message</p> <ol> <li>Create a VM with <code>runStrategy: RunStrategyAlways</code>.</li> <li>Stop the VM.</li> <li>Check there is no <code>Failure attempting to delete VMI: &lt;nil&gt;</code> in VM status.</li> </ol> <p>UI should not show pause message</p> <ol> <li>Create a VM.</li> <li>Pause the VM.</li> <li>Although the message <code>The status of pod readliness gate &quot;kubevirt.io/virtual-machine-unpaused&quot; is not &quot;True&quot;, but False</code> is in the VM condition, UI should not show it.</li> </ol> Check crash dump when there's a kernel panic https://harvester.github.io/tests/manual/hosts/1357-kernel-panic-check-crash-dump/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1357-kernel-panic-check-crash-dump/ - Related issues: #1357 Crash dump not written when kernel panic occurs Verification Steps Created new single node cluster with 16GB RAM Booted into debug mode from GRUB entry Created several VMs triggered kernel panic with echo c &gt;/proc/sysrq-trigger Waited for reboot Verified that dump was saved in /var/crash Expected Results dump should be saved in /var/crash + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1357">#1357</a> Crash dump not written when kernel panic occurs</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Created new single node cluster with 16GB RAM</li> <li>Booted into debug mode from GRUB entry</li> <li>Created several VMs</li> <li>triggered kernel panic with <code>echo c &gt;/proc/sysrq-trigger</code></li> <li>Waited for reboot</li> <li>Verified that dump was saved in <code>/var/crash</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>dump should be saved in <code>/var/crash</code></li> </ol> Check default and customized project and namespace details page https://harvester.github.io/tests/manual/harvester-rancher/check-default-customized-project-and-namespace-details-page/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/check-default-customized-project-and-namespace-details-page/ - Related issue: #1574 Multi-cluster projectNamespace details page error Category: Rancher Integration Environment setup Install rancher 2.6.3 by docker docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.3 Verification Steps Import harvester from rancher dashboard Access harvester from virtualization management page Create several new projects Create several new namespaces under each new projects Access all default and self created namespace Check can display namespace details Check all new namespaces can display correctly under each projects Expected Results Access harvester from rancher virtualization management page Click any namespace in the Projects/Namespace can display details correctly with no page error Default namespace Customized namespace Newly created namespace will display under project list + <ul> <li>Related issue: <a href="https://github.com/harvester/harvester/issues/1574">#1574</a> Multi-cluster projectNamespace details page error</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher Integration</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ol> <li>Install rancher <code>2.6.3</code> by docker</li> </ol> <pre tabindex="0"><code>docker run -d 
--restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.3 </code></pre><h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import harvester from rancher dashboard</li> <li>Access harvester from virtualization management page</li> <li>Create several new projects</li> <li>Create several new namespaces under each new projects</li> <li>Access all default and self created namespace</li> <li>Check can display namespace details</li> <li>Check all new namespaces can display correctly under each projects</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Access harvester from rancher virtualization management page Click any namespace in the Projects/Namespace can display details correctly with no page error</li> </ol> <p>Default namespace <img src="https://user-images.githubusercontent.com/29251855/143835124-6f81b902-e0b1-4cbd-8e1f-e818ee033fdb.png" alt="image"></p> check detailed network status in host page https://harvester.github.io/tests/manual/hosts/check-detailed-network-status-in-host-page/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/check-detailed-network-status-in-host-page/ - Related issues: #531 Better error messages when misconfiguring multiple nics Category: Host Verification Steps Enable vlan cluster network setting and set a default network interface Wait a while for the setting take effect on all harvester nodes Click nodes on host page Check the network tab Expected Results On the Host view page, now we can see detailed network status including Name, Type, IP Address, Status etc.. Check all network interface can display Check the Name, Type, IP Address, Status display correct values + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/531">#531</a> Better error messages when misconfiguring multiple nics</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Enable vlan cluster network setting and set a default network interface</li> <li>Wait a while for the setting take effect on all harvester nodes</li> <li>Click nodes on host page</li> <li>Check the network tab</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>On the Host view page, now we can see detailed network status including <code>Name</code>, <code>Type</code>, <code>IP Address</code>, <code>Status</code> etc.. 
<img src="https://user-images.githubusercontent.com/29251855/141070311-55ec4382-d777-4289-91c7-cebe81db3356.png" alt="image"></p> Check DNS on install with Github SSH keys https://harvester.github.io/tests/manual/_incoming/1903-dns-github-ssh-keys/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1903-dns-github-ssh-keys/ - Related issues: #1903 DNS server not available during install Verification Steps Without PXE Start a new install Set DNS as 8.8.8.8 Add in github SSH keys Finish install SSH into node with SSH keys from github (rancher@hostname) Verify login was successful With PXE Got vagrant setup from https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester Changed settings.yml DHCP config and added dns: 8.8.8.8 dhcp_server: ip: 192.168.0.254 subnet: 192.168.0.0 netmask: 255.255.255.0 range: 192.168.0.50 192.168.0.130 dns: 8.8.8.8 https: false Also changed ssh_authorized_keys and commented out default SSH key and added username for github + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1903">#1903</a> DNS server not available during install</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <h3 id="without-pxe">Without PXE</h3> <ol> <li>Start a new install</li> <li>Set DNS as <code>8.8.8.8</code></li> <li>Add in github SSH keys</li> <li>Finish install</li> <li>SSH into node with SSH keys from github (<code>rancher@hostname</code>)</li> <li>Verify login was successful</li> </ol> <h3 id="with-pxe">With PXE</h3> <ol> <li>Got vagrant setup from <a href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester">https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester</a></li> <li>Changed <code>settings.yml</code> DHCP config and added <code>dns: 8.8.8.8</code></li> </ol> <pre tabindex="0"><code>dhcp_server: ip: 192.168.0.254 subnet: 192.168.0.0 netmask: 255.255.255.0 range: 192.168.0.50 192.168.0.130 dns: 8.8.8.8 https: false </code></pr Check favicon and title on pages https://harvester.github.io/tests/manual/misc/1520-check-title-and-favicon/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/misc/1520-check-title-and-favicon/ - Related issues: #1520 incorrect title and favicon Verification Steps Log into Harvester Check page title and favicon on each of these pages dashboard main page settings support Volumes SSH Keys Host info Expected Results Harvester favicon and title should show on each page + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1520">#1520</a> incorrect title and favicon</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Log into Harvester</li> <li>Check page title and favicon on each of these pages <ul> <li>dashboard</li> <li>main page</li> <li>settings</li> <li>support</li> <li>Volumes</li> <li>SSH Keys</li> <li>Host info</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Harvester favicon and title should show on each page</li> </ol> Check Harvester CloudInit CRDs within Harvester, Terraform & Rancher https://harvester.github.io/tests/manual/misc/3902-elemental-cloud-init-harvester-crds/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/misc/3902-elemental-cloud-init-harvester-crds/ - Related issues: #3902 support elemental cloud-init via harvester-node-manager Testing With Terraform TBD Testing From Harvester UI TBD Testing From Rancher Fleet UI / Harvester Fleet Controller TBD Testing w/ Harvester 
Kubeconfig via Kubectl &amp; K9s (or similar tool) Pre-Reqs: Have an available multi-node Harvester cluster, w/out your ssh-key present on any nodes Provision cluster however is easiest K9s (or other similar kubectl tooling) kubectl audit elemental toolkit for an understanding of stages audit harvester configuration to correlate properties to elemental-toolkit based stages / functions Negative Tests: Validate Non-YAML Files Get . + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/3902">#3902</a> support elemental cloud-init via harvester-node-manager</li> </ul> <h2 id="testing-with-terraform">Testing With Terraform</h2> <ol> <li>TBD</li> </ol> <h2 id="testing-from-harvester-ui">Testing From Harvester UI</h2> <ol> <li>TBD</li> </ol> <h2 id="testing-from-rancher-fleet-ui--harvester-fleet-controller">Testing From Rancher Fleet UI / Harvester Fleet Controller</h2> <ol> <li>TBD</li> </ol> <h2 id="testing-w-harvester-kubeconfig-via-kubectl--k9s-or-similar-tool">Testing w/ Harvester Kubeconfig via Kubectl &amp; K9s (or similar tool)</h2> <h3 id="pre-reqs">Pre-Reqs:</h3> <ul> <li>Have an available multi-node Harvester cluster, w/out your ssh-key present on any nodes</li> <li>Provision cluster however is easiest</li> <li>K9s (or other similar kubectl tooling)</li> <li>kubectl</li> <li>audit <a href="https://rancher.github.io/elemental-toolkit/docs/customizing/stages">elemental toolkit</a> for an understanding of stages</li> <li>audit <a href="https://docs.harvesterhci.io/v1.2/install/harvester-configuration">harvester configuration</a> to correlate properties to elemental-toolkit based stages / functions</li> </ul> <h3 id="negative-tests">Negative Tests:</h3> <h4 id="validate-non-yaml-files-get-yaml-as-suffix-on-file-system">Validate Non-YAML Files Get .yaml as Suffix On File-System</h4> <ol> <li>Prepare a YAML loadout of a CloudInit resource that takes the shape of:</li> </ol> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">node.harvesterhci.io/v1beta1</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">kind</span>: <span style="color:#ae81ff">CloudInit</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">metadata</span>: </span></span><span style="display:flex;"><span> <span style="color:#f92672">name</span>: <span style="color:#ae81ff">write-file-with-non-yaml-filename</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">spec</span>: </span></span><span style="display:flex;"><span> <span style="color:#f92672">matchSelector</span>: {} </span></span><span style="display:flex;"><span> <span style="color:#f92672">filename</span>: <span style="color:#ae81ff">99_filewrite.log</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">contents</span>: |<span style="color:#e6db74"> </span></span></span><span style="display:flex;"><span><span style="color:#e6db74"> stages: </span></span></span><span style="display:flex;"><span><span style="color:#e6db74"> fs: </span></span></span><span style="display:flex;"><span><span style="color:#e6db74"> - name: &#34;write file test&#34; </span></span></span><span style="display:flex;"><span><span style="color:#e6db74"> commands: </span></span></span><span style="display:flex;"><span><span 
style="color:#e6db74"> - echo &#34;hello, there&#34; &gt; /etc/sillyfile.conf</span> </span></span></code></pr Check IPAM configuration with IPAM https://harvester.github.io/tests/manual/_incoming/1697-ipam-load-balancer/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1697-ipam-load-balancer/ - Related issues: #1697 Optimization for the Harvester load balancer Verification Steps Install the latest rancher and import a Harvester cluster Create a cluster by Harvester node driver Navigate to the workload Page, create a workload Click &ldquo;Add ports&rdquo;, select type as LB, protocol as TCP Check IPAM selector Navigate to the service page, create a LB Click &ldquo;Add-on config&rdquo; tab and check IPAM and port + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1697">#1697</a> Optimization for the Harvester load balancer</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install the latest rancher and import a Harvester cluster</li> <li>Create a cluster by Harvester node driver</li> <li>Navigate to the workload Page, create a workload</li> <li>Click &ldquo;Add ports&rdquo;, select type as LB, protocol as TCP</li> <li>Check IPAM selector</li> <li>Navigate to the service page, create a LB</li> <li>Click &ldquo;Add-on config&rdquo; tab and check IPAM and port <img src="https://user-images.githubusercontent.com/83787952/152212105-2b2335be-b12b-42ac-bfcf-aa1d2aeb6fd3.png" alt="image.png"> <img src="https://user-images.githubusercontent.com/83787952/152212109-039a3e23-9eae-4ffc-9318-58f048a112c1.png" alt="image.png"></li> </ol> Check IPv4 static method in ISO installer https://harvester.github.io/tests/manual/_incoming/2796-check-ipv4-static-method-in-iso-installer/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2796-check-ipv4-static-method-in-iso-installer/ - Related issues: #2796 [BUG] configure network failed if use static mode Category: Newtork Harvester Installer Verification Steps Use latest ISO to install Enter VLAN field with empty 1 1000 choose static method fill other fields press enter to the next page no error found, and show DNS config page Expected Results During Harvester ISO installer We can configure VLAN network on the static mode with the following settings: No error message blocked Can proceed to dns config page + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2796">#2796</a> [BUG] configure network failed if use static mode</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Newtork</li> <li>Harvester Installer</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Use latest ISO to install</li> <li>Enter VLAN field with <ul> <li>empty</li> <li>1</li> <li>1000</li> </ul> </li> <li>choose static method</li> <li>fill other fields</li> <li>press enter to the next page</li> <li>no error found, and show DNS config page</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>During Harvester ISO installer We can configure VLAN network on the static mode with the following settings:</p> Check logs on Harvester https://harvester.github.io/tests/manual/_incoming/2528-check-logs-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2528-check-logs-harvester/ - Related issues: #2528 [BUG] Tons of AppArmor denied messages Category: Logging Environment Setup This should be run on a Harvester node that has been up for a while and has been in use Verification Steps SSH to 
harvester node Execute journalctl -b -f Look through logs and verify that there isn&rsquo;t anything generating lots of erroneous messages Expected Results There shouldn&rsquo;t be large volumes of erroneous messages + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2528">#2528</a> [BUG] Tons of AppArmor denied messages</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Logging</li> </ul> <h2 id="environment-setup">Environment Setup</h2> <ul> <li>This should be run on a Harvester node that has been up for a while and has been in use</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>SSH to harvester node</li> <li>Execute <code>journalctl -b -f</code></li> <li>Look through logs and verify that there isn&rsquo;t anything generating lots of erroneous messages</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>There shouldn&rsquo;t be large volumes of erroneous messages</li> </ol> Check Longhorn volume mount point https://harvester.github.io/tests/manual/hosts/1667-check-longhorn-volume-mount/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1667-check-longhorn-volume-mount/ - Related issues: #1667 data partition is not mounted to the LH path properly Verification Steps Install Harvester node in VM from ISO Check partitions with lsblk -f Verify mount point of /var/lib/longhorn Expected Results Mount point should show /var/lib/longhorn + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1667">#1667</a> data partition is not mounted to the LH path properly</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester node in VM from ISO</li> <li>Check partitions with <code>lsblk -f</code></li> <li>Verify mount point of <code>/var/lib/longhorn</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Mount point should show <code>/var/lib/longhorn</code> <img src="https://user-images.githubusercontent.com/83787952/146290004-0584f817-d9df-4f4d-9069-d3ed4199b30f.png" alt="image"></li> </ol> Check Longhorn volume mount point https://harvester.github.io/tests/manual/volumes/1667-check-longhorn-volume-mount/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/1667-check-longhorn-volume-mount/ - Related issues: #1667 data partition is not mounted to the LH path properly Verification Steps Install Harvester node in VM from ISO Check partitions with lsblk -f Verify mount point of /var/lib/longhorn Expected Results Mount point should show /var/lib/longhorn + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1667">#1667</a> data partition is not mounted to the LH path properly</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester node in VM from ISO</li> <li>Check partitions with <code>lsblk -f</code></li> <li>Verify mount point of <code>/var/lib/longhorn</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Mount point should show <code>/var/lib/longhorn</code> <img src="https://user-images.githubusercontent.com/83787952/146290004-0584f817-d9df-4f4d-9069-d3ed4199b30f.png" alt="image"></li> </ol> Check Network interface link status can match the available NICs in Harvester vlanconfig https://harvester.github.io/tests/manual/_incoming/2988-check-network-link-match-vlanconfig/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2988-check-network-link-match-vlanconfig/ - Related issues: 
#2988 [BUG] Network interface link status judgement did not match the available NICs in Harvester vlanconfig Category: Network Verification Steps Create cluster network cn1 Create a vlanconfig config-n1 on cn1 which applied to node 1 only Select an available NIC on the Uplink Create a vlan, the cluster network cn1 vlanconfig and provide valid vlan id 91 Edit config-n1, Check NICs list in Uplink ssh to node 1 + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2988">#2988</a> [BUG] Network interface link status judgement did not match the available NICs in Harvester vlanconfig</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Create cluster network <code>cn1</code> <img src="https://user-images.githubusercontent.com/29251855/196580297-57541544-48f5-4492-b3e9-a3450697f490.png" alt="image"></p> </li> <li> <p>Create a vlanconfig <code>config-n1</code> on <code>cn1</code> which applied to node 1 only <img src="https://user-images.githubusercontent.com/29251855/196580491-0572c539-5828-4f2e-a0a6-59b40fcc549b.png" alt="image"></p> </li> <li> <p>Select an available NIC on the Uplink <img src="https://user-images.githubusercontent.com/29251855/196580574-d38d59de-251c-4cf8-885d-655b76a78659.png" alt="image"></p> </li> <li> <p>Create a vlan, the cluster network <code>cn1</code> vlanconfig and provide valid vlan id <code>91</code> <img src="https://user-images.githubusercontent.com/29251855/196584602-b663ca69-da9a-42e3-94e0-41e094ff1d0b.png" alt="image"></p> Check rancher-monitoring-grafana volume size https://harvester.github.io/tests/manual/_incoming/2282-check-rancher-monitoring-grafana-volume-size/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2282-check-rancher-monitoring-grafana-volume-size/ - Related issues: #2282 [BUG] rancher-monitoring-grafana is too small and it keeps growing Category: Monitoring Verification Steps Harvester cluster running after 24 hours Access Harvester Longhorn dashboard via https:///dashboard/c/local/longhorn Open the Longhorn UI Open the volume page Check the rancher-monitoring-grafana size and usage Shutdown a management node machine Power on the management node machine Wait for 60 minutes Check the rancher-monitoring-grafana size and usage in Longhorn UI Shutdown all management node machines in sequence Power on all management node machines in sequence Wait for 60 minutes Check the rancher-monitoring-grafana size and usage in Longhorn UI Expected Results The rancher-monitoring-grafana default allocated with 2Gi and Actual usage 108 Mi after running after 24 hours Turn off then turn on the specific vip harvester node machine, the The rancher-monitoring-grafana keep stable in 107 Mi after turning on 60 minutes Turn off then turn on all four harvester node machines, the The rancher-monitoring-grafana keep stable in 107 Mi after turning on 60 minutes + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2282">#2282</a> [BUG] rancher-monitoring-grafana is too small and it keeps growing</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Monitoring</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Harvester cluster running after 24 hours</li> <li>Access Harvester Longhorn dashboard via https://<!-- raw HTML omitted -->/dashboard/c/local/longhorn</li> <li>Open the Longhorn UI</li> <li>Open the volume page</li> <li>Check the <code>rancher-monitoring-grafana</code> size and 
usage</li> <li>Shutdown a management node machine</li> <li>Power on the management node machine</li> <li>Wait for 60 minutes</li> <li>Check the <code>rancher-monitoring-grafana</code> size and usage in Longhorn UI</li> <li>Shutdown all management node machines in sequence</li> <li>Power on all management node machines in sequence</li> <li>Wait for 60 minutes</li> <li>Check the <code>rancher-monitoring-grafana</code> size and usage in Longhorn UI</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>The <code>rancher-monitoring-grafana</code> default allocated with <code>2Gi</code> and Actual usage <code>108 Mi</code> after running after 24 hours <img src="https://user-images.githubusercontent.com/29251855/191000121-9c3c640e-7d7f-4d1b-84f6-39745abca0ce.png" alt="image"></p> Check redirect for editing server URL setting https://harvester.github.io/tests/manual/hosts/1489-redirect-for-server-url-setting/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1489-redirect-for-server-url-setting/ - Related issues: #1489 Edit Advanced Setting option server-url will redirect to inappropriate page Verification Steps Install harvester Access harvester Edit server-url form settings Check server-url save, cancel, and back. Additional context: Expected Results URL should stay the same when navigating + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1489">#1489</a> Edit Advanced Setting option server-url will redirect to inappropriate page</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install harvester</li> <li>Access harvester</li> <li>Edit server-url form settings</li> <li>Check server-url save, cancel, and back. Additional context: <img src="https://user-images.githubusercontent.com/18737885/140492691-969380aa-dbed-4999-9e90-e589dd93e4e4.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>URL should stay the same when navigating</li> </ol> Check support bundle for SLE Micro OS https://harvester.github.io/tests/manual/_incoming/2420-2464-check-support-bundle-sle-micro-os/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2420-2464-check-support-bundle-sle-micro-os/ - Related issues: #2420 [FEATURE] support bundle: support SLE Micro OS Related issues: #2464 [backport v1.0] [FEATURE] support bundle: support SLE Micro OS Category: Support Bundle Verification Steps Download support bundle in support page Extract the support bundle, check every file have content ssh to harvester node Check the /etc/os-release file content Expected Results Check can download support bundle correctly, check can access every file without empty Checked every harvester nodes, the ID have changed to sle-micro-rancher + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2420">#2420</a> [FEATURE] support bundle: support SLE Micro OS</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2464">#2464</a> [backport v1.0] [FEATURE] support bundle: support SLE Micro OS</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Support Bundle</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Download support bundle in support page</li> <li>Extract the support bundle, check every file have content</li> <li>ssh to harvester node</li> <li>Check the /etc/os-release file content</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>Check can download support bundle correctly, check can 
access every file without empty</p> Check that you can communicate with the Harvester cluster https://harvester.github.io/tests/manual/terraformer/harvester-cluster-communicate/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraformer/harvester-cluster-communicate/ - Set the KUBECONFIG env variable with the path of your kubeconfig file Try to import any resource to test the connectivity with the Harvester cluster For instance, try to import ssh-key with: terraformer import harvester -r ssh_key Expected Results You should see: terraformer import harvester -r ssh_key 2021/08/04 15:18:59 harvester importing... ssh_key 2021/08/04 15:18:59 harvester done importing ssh_key ... And the generated files should appear in ./generated/harvester/ssh_key/ + <ol> <li>Set the KUBECONFIG env variable with the path of your kubeconfig file</li> <li>Try to import any resource to test the connectivity with the Harvester cluster For instance, try to import ssh-key with: <code>terraformer import harvester -r ssh_key</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <p>You should see:</p> <pre tabindex="0"><code>terraformer import harvester -r ssh_key 2021/08/04 15:18:59 harvester importing... ssh_key 2021/08/04 15:18:59 harvester done importing ssh_key ... </code></pr Check the OS types in Advanced Options https://harvester.github.io/tests/manual/_incoming/2776-check-os-types-in-advanced-options/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2776-check-os-types-in-advanced-options/ - Related issues: #2776 [FEATURE] remove some dead OS types Category: Network Verification Steps Login harvester dashboard Open the VM create page, check the OS type list Open the image create page, check the OS type list Open the template create page, check the OS type list Expected Results The following OS types should be removed from list Turbolinux Mandriva Xandros In v1.1.0 master we add the SUSE Linux Enterprise in the VM creation page In the image create page In the template create page + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2776">#2776</a> [FEATURE] remove some dead OS types</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Login harvester dashboard</li> <li>Open the VM create page, check the OS type list</li> <li>Open the image create page, check the OS type list</li> <li>Open the template create page, check the OS type list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>The following OS types should be removed from list</p> <ul> <li>Turbolinux</li> <li>Mandriva</li> <li>Xandros</li> </ul> </li> <li> <p>In v1.1.0 master we add the <code>SUSE Linux Enterprise</code> in the VM creation page <img src="https://user-images.githubusercontent.com/29251855/190973269-764e425f-20be-4cb1-8334-e7af668a7798.png" alt="image"></p> Check the VM is available when Harvester upgrade failed https://harvester.github.io/tests/manual/_incoming/vm-availability-when-harvester-upgrade-failed/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/vm-availability-when-harvester-upgrade-failed/ - Category: Harvester Upgrade Verification Steps Prepare the previous stable Harvester release cluster Create image Enable Network and create VM Create several virtual machine Follow the official document steps to prepare the online or offline upgrade Do not shutdown virtual machine Start the upgrade Check 
the VM status if the upgrade failed at Preload images, Upgrade Rancher and Upgrade Harvester phase Check the VM status if the upgrade failed at the Pre-drain, Post-drain and RKE2 &amp; OS upgrade phase Expected Results The VM should be work when upgrade failed at Preload images, Upgrade Rancher and Upgrade Harvester phase The VM could not able to function well when upgrade failed at the Pre-drain, Post-drain and RKE2 &amp; OS upgrade phase + <h2 id="category">Category:</h2> <ul> <li>Harvester Upgrade</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare the previous stable Harvester release cluster</li> <li>Create image</li> <li>Enable Network and create VM</li> <li>Create several virtual machines</li> <li>Follow the <a href="https://docs.harvesterhci.io/v1.0/upgrade/automatic/">official document steps</a> to prepare the online or offline upgrade</li> <li>Do not shut down the virtual machines</li> <li>Start the upgrade</li> <li>Check the VM status if the upgrade fails at the <code>Preload images</code>, <code>Upgrade Rancher</code> and <code>Upgrade Harvester</code> phases</li> <li>Check the VM status if the upgrade fails at the <code>Pre-drain</code>, <code>Post-drain</code> and <code>RKE2 &amp; OS upgrade</code> phases</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should keep working when the upgrade fails at the <code>Preload images</code>, <code>Upgrade Rancher</code> and <code>Upgrade Harvester</code> phases</li> <li>The VM may not function properly when the upgrade fails at the <code>Pre-drain</code>, <code>Post-drain</code> and <code>RKE2 &amp; OS upgrade</code> phases</li> </ol> Check version compatibility during an upgrade https://harvester.github.io/tests/manual/_incoming/2431-check-version-compatibility-during-upgrade/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2431-check-version-compatibility-during-upgrade/ - Related issues: #2431 [FEATURE] Check version compatibility during an upgrade Category: Upgrade Verification Steps Test Plan 1: v1.0.2 upgrade to v1.1.0 with release tag Test Plan 2: v1.0.3 upgrade to v1.1.0 with release tag Test Plan 3: v1.0.2 upgrade to v1.1.0 without release tag Prepare v1.0.2, v1.0.3 Harvester ISO image Prepare v1.1.0 ISO image with release tag Prepare v1.1.0 ISO image without release tag Put different ISO image to HTTP server Create the upgrade yaml to create service cat &lt;&lt;EOF | kubectl apply -f - apiVersion: harvesterhci. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2431">#2431</a> [FEATURE] Check version compatibility during an upgrade</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <h3 id="test-plan-1-v102-upgrade-to-v110-with-release-tag">Test Plan 1: v1.0.2 upgrade to v1.1.0 with release tag</h3> <h3 id="test-plan-2-v103-upgrade-to-v110-with-release-tag">Test Plan 2: v1.0.3 upgrade to v1.1.0 with release tag</h3> <h3 id="test-plan-3-v102-upgrade-to-v110-without-release-tag">Test Plan 3: v1.0.2 upgrade to v1.1.0 without release tag</h3> <ol> <li>Prepare v1.0.2, v1.0.3 Harvester ISO image</li> <li>Prepare v1.1.0 ISO image with release tag</li> <li>Prepare v1.1.0 ISO image without release tag</li> <li>Put different ISO image to HTTP server</li> <li>Create the upgrade yaml to create service <pre tabindex="0"><code>cat &lt;&lt;EOF | kubectl apply -f - apiVersion: harvesterhci.io/v1beta1 kind: Version metadata: name: v1.1.0 namespace: harvester-system spec: isoURL: &#34;http://192.168.1.110:8000/harvester-eeeb1be-dirty-amd64.iso&#34; EOF </code></pr Check VM creation required-fields https://harvester.github.io/tests/manual/virtual-machines/1283-vm-creation-required-fields/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/1283-vm-creation-required-fields/ - Related issues: #1283 Fix required fields on VM creation page Verification Steps Create VM without image name and size Create VM without size Create VM wihout image name Create VM without hostname Expected Results You should get an error trying to create VM without image name and size You should get an error trying to create VM without image name You should get an error trying to create VM without size You should not get an error trying to create a VM without hostname + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1283">#1283</a> Fix required fields on VM creation page</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create VM without image name and size</li> <li>Create VM without size</li> <li>Create VM wihout image name</li> <li>Create VM without hostname</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error trying to create VM without image name and size</li> <li>You should get an error trying to create VM without image name</li> <li>You should get an error trying to create VM without size</li> <li>You should not get an error trying to create a VM without hostname</li> </ol> Check volume status after upgrade https://harvester.github.io/tests/manual/_incoming/2920-volume-status-after-upgrade/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2920-volume-status-after-upgrade/ - Related issues: #2920 [BUG] Volume can&rsquo;t turn into healthy when upgrading from v1.0.3 to v1.1.0-rc2 Category: Volume Verification Steps Prepare a 4 nodes v1.0.3 Harvester cluster Install several images Create three VMs Enable Network Create vlan1 network Shutdown all VMs Upgrade to v1.1.0-rc3 Check the volume status in Longhorn UI Open K9s, Check the pvc status after upgrade Expected Results Can finish the pre-drain of each node and successfully upgrade to v1. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2920">#2920</a> [BUG] Volume can&rsquo;t turn into healthy when upgrading from v1.0.3 to v1.1.0-rc2</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Volume</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare a 4 nodes v1.0.3 Harvester cluster</li> <li>Install several images</li> <li>Create three VMs</li> <li>Enable Network</li> <li>Create vlan1 network</li> <li>Shutdown all VMs</li> <li>Upgrade to v1.1.0-rc3</li> <li>Check the volume status in Longhorn UI</li> <li>Open K9s, Check the pvc status after upgrade</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>Can finish the pre-drain of each node and successfully upgrade to v1.1.0-rc3 <img src="https://user-images.githubusercontent.com/29251855/196434398-a61b5111-7723-4fa6-ac57-2a68ffef73ee.png" alt="image"></p> Clone image (e2e_fe) https://harvester.github.io/tests/manual/_incoming/2562-clone-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2562-clone-image/ - Related issues: #2562 [[BUG] Image&rsquo;s labels will not be copied when execute Clone Category: Images Verification Steps Install Harvester with any nodes Create a Image via URL Clone the Image and named image-b Check image-b labels in Labels tab Expected Results All labels should be cloned and shown in labels tab + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2562">#2562</a> [[BUG] Image&rsquo;s labels will not be copied when execute Clone</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Images</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester with any nodes</li> <li>Create a Image via URL</li> <li>Clone the Image and named image-b</li> <li>Check image-b labels in Labels tab</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>All labels should be cloned and shown in labels tab</li> </ol> Clone VM and don't select start after creation https://harvester.github.io/tests/manual/virtual-machines/clone-vm-and-dont-select-start-after-creation/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/clone-vm-and-dont-select-start-after-creation/ - Case 1 Clone VM from Virtual Machine list and don&rsquo;t select start after creation Expected Results Machine should start if start VM after creation was checked Machine should match the origin machine in Config In YAML You should be able to connect to new VM via console Case 2 Clone VM with volume from Virtual Machine list and don&rsquo;t select start after creation Expected Results Before cloning machine create file run command echo &quot;123&quot; &gt; test. 
+ <h2 id="case-1">Case 1</h2> <ol> <li>Clone VM from Virtual Machine list and don&rsquo;t select start after creation</li> </ol> <h3 id="expected-results">Expected Results</h3> <ol> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine</li> <li>in Config</li> <li>In YAML</li> <li>You should be able to connect to new VM via console</li> </ol> <h2 id="case-2">Case 2</h2> <ol> <li>Clone VM with volume from Virtual Machine list and don&rsquo;t select start after creation</li> </ol> <h3 id="expected-results-1">Expected Results</h3> <ol> <li>Before cloning machine create file run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code></li> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine</li> <li>in Config</li> <li>In YAML</li> <li>You should be able to connect to new VM via console</li> <li>file <code>test.txt</code> should exist</li> </ol> Clone VM that is turned off https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-is-turned-off/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-is-turned-off/ - Case 1 Clone VM from Virtual Machine list that is turned off Expected Results Machine should start if start VM after creation was checked Machine should match the origin machine in Config In YAML You should be able to connect to new VM via console Case 2 Clone VM with volume from Virtual Machine list that is turned off Expected Results Before cloning machine create file run command echo &quot;123&quot; &gt; test. + <h2 id="case-1">Case 1</h2> <ol> <li>Clone VM from Virtual Machine list that is turned off</li> </ol> <h3 id="expected-results">Expected Results</h3> <ol> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> </ol> <h2 id="case-2">Case 2</h2> <ol> <li>Clone VM with volume from Virtual Machine list that is turned off</li> </ol> <h3 id="expected-results-1">Expected Results</h3> <ol> <li>Before cloning machine create file run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code></li> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> <li>file <code>test.txt</code> should exist</li> </ol> Clone VM that is turned on https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-is-turned-on/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-is-turned-on/ - Case 1 Clone VM from Virtual Machine list that is turned on Expected Results Machine should start if start VM after creation was checked Machine should match the origin machine in Config In YAML You should be able to connect to new VM via console Case 2 Clone VM with volume from Virtual Machine list that is turned on Expected Results Before cloning machine create file run command echo &quot;123&quot; &gt; test. 
+ <h2 id="case-1">Case 1</h2> <ol> <li>Clone VM from Virtual Machine list that is turned on</li> </ol> <h3 id="expected-results">Expected Results</h3> <ol> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> </ol> <h2 id="case-2">Case 2</h2> <ol> <li>Clone VM with volume from Virtual Machine list that is turned on</li> </ol> <h3 id="expected-results-1">Expected Results</h3> <ol> <li>Before cloning machine create file run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code></li> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> <li>file <code>test.txt</code> should exist</li> </ol> Clone VM that was created from existing volume https://harvester.github.io/tests/manual/virtual-machines/clone-vm-existing-volume/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/clone-vm-existing-volume/ - Case 1 Clone VM from Virtual Machine list that was created from existing volume Expected Results When completing the clone you should get an error that the volume is already in use Case 2 Clone VM with volume from Virtual Machine list that was created from existing volume Expected Results Before cloning machine create file run command echo &quot;123&quot; &gt; test.txt &amp;&amp; sync Machine should start if start VM after creation was checked Machine should match the origin machine in Config In YAML You should be able to connect to new VM via console file test. + <h2 id="case-1">Case 1</h2> <ol> <li>Clone VM from Virtual Machine list that was created from existing volume</li> </ol> <h3 id="expected-results">Expected Results</h3> <ol> <li>When completing the clone you should get an error that the volume is already in use</li> </ol> <h2 id="case-2">Case 2</h2> <ol> <li>Clone VM with volume from Virtual Machine list that was created from existing volume</li> </ol> <h3 id="expected-results-1">Expected Results</h3> <ol> <li>Before cloning machine create file run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code></li> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> <li>file <code>test.txt</code> should exist</li> </ol> Clone VM that was created from image https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-was-created-from-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-was-created-from-image/ - Case 1 Clone VM from Virtual Machine list that was created from image Expected Results Machine should start if start VM after creation was checked Machine should match the origin machine in Config In YAML You should be able to connect to new VM via console Case 2 Clone VM with volume from Virtual Machine list that was created from image Expected Results Before cloning machine create file run command echo &quot;123&quot; &gt; test. 
+ <h2 id="case-1">Case 1</h2> <ol> <li>Clone VM from Virtual Machine list that was created from image</li> </ol> <h3 id="expected-results">Expected Results</h3> <ol> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> </ol> <h2 id="case-2">Case 2</h2> <ol> <li>Clone VM with volume from Virtual Machine list that was created from image</li> </ol> <h3 id="expected-results-1">Expected Results</h3> <ol> <li>Before cloning machine create file run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code></li> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> <li>file <code>test.txt</code> should exist</li> </ol> Clone VM that was created from template https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-was-created-from-template/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-was-created-from-template/ - Case 1 Clone VM from Virtual Machine list that was created from template Expected Results Machine should start if start VM after creation was checked Machine should match the origin machine in Config In YAML You should be able to connect to new VM via console Case 2 Clone VM with volume from Virtual Machine list that was created from template Expected Results Before cloning machine create file run command echo &quot;123&quot; &gt; test. + <h2 id="case-1">Case 1</h2> <ol> <li>Clone VM from Virtual Machine list that was created from template</li> </ol> <h3 id="expected-results">Expected Results</h3> <ol> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> </ol> <h2 id="case-2">Case 2</h2> <ol> <li>Clone VM with volume from Virtual Machine list that was created from template</li> </ol> <h3 id="expected-results-1">Expected Results</h3> <ol> <li>Before cloning machine create file run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code></li> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> <li>file <code>test.txt</code> should exist</li> </ol> Clone VM that was not created from image https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-was-not-created-from-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-was-not-created-from-image/ - Case 1 Clone VM from Virtual Machine list that was not created from image Expected Results Machine should start if start VM after creation was checked Machine should match the origin machine in Config In YAML You should be able to connect to new VM via console Case 2 Clone VM with volume from Virtual Machine list that was not created from image Expected Results Before cloning machine create file run command echo &quot;123&quot; &gt; test. 
+ <h2 id="case-1">Case 1</h2> <ol> <li>Clone VM from Virtual Machine list that was not created from image</li> </ol> <h3 id="expected-results">Expected Results</h3> <ol> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> </ol> <h2 id="case-2">Case 2</h2> <ol> <li>Clone VM with volume from Virtual Machine list that was not created from image</li> </ol> <h3 id="expected-results-1">Expected Results</h3> <ol> <li>Before cloning machine create file run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code></li> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> <li>file <code>test.txt</code> should exist</li> </ol> Cluster add labs https://harvester.github.io/tests/manual/node-driver/create-add-labs/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/create-add-labs/ - add a harvester node template Refer to the &ldquo;Test Data&rdquo; value setting. Use this template to create the corresponding cluster Expected Results Use the command &ldquo;kubectl get node &ndash;show-labels&rdquo; to see the success of the added tabs Go to the node details page of UI, click the &ldquo;Edit Node&rdquo; button, and check Labels Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. + <ol> <li>add a harvester node template</li> <li>Refer to the &ldquo;Test Data&rdquo; value setting.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Use the command &ldquo;kubectl get node &ndash;show-labels&rdquo; to see the success of the added tabs</li> <li>Go to the node details page of UI, click the &ldquo;Edit Node&rdquo; button, and check Labels</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Cluster add Taints https://harvester.github.io/tests/manual/node-driver/cluster-add-taints/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-add-taints/ - add a harvester node template Refer to the &ldquo;Test Data&rdquo; value setting. Use this template to create the corresponding cluster Expected Results Use the command kubectl describe node test-tain5 | grep Taint to see if Taint was added successfully. Go to the node details page of UI, click the &ldquo;Edit Node&rdquo; button, and check Taint Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. 
+ <ol> <li>add a harvester node template</li> <li>Refer to the &ldquo;Test Data&rdquo; value setting.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Use the command <code>kubectl describe node test-tain5 | grep Taint</code> to see if Taint was added successfully.</li> <li>Go to the node details page of UI, click the &ldquo;Edit Node&rdquo; button, and check Taint</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Cluster TLS customization https://harvester.github.io/tests/manual/advanced/tls_customize/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/tls_customize/ - Ref: https://github.com/harvester/harvester/issues/1046 Verify Items Cluster&rsquo;s SSL/TLS parameters could be configured in install option Cluster&rsquo;s SSL/TLS parameters could be updated in dashboard Case: Configure TLS parameters in dashboard Install Harvester with any nodes Navigate to Advanced Settings, then edit ssl-parameters Select Protocols TLSv1.3, then save execute command echo QUIT | openssl s_client -connect &lt;VIP&gt;:443 -tls1_2 | grep &quot;Cipher is&quot; Output should contain error...SSL routines... and Cipher is (NONE) execute command echo QUIT | openssl s_client -connect &lt;VIP&gt;:443 -tls1_3 | grep &quot;Cipher is&quot; Output should contain Cipher is &lt;one_of_TLS1_3_Ciphers&gt;1 and should not contain error. 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/1046">https://github.com/harvester/harvester/issues/1046</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Cluster&rsquo;s SSL/TLS parameters could be configured in install option</li> <li>Cluster&rsquo;s SSL/TLS parameters could be updated in dashboard</li> </ul> <h2 id="case-configure-tls-parameters-in-dashboard">Case: Configure TLS parameters in dashboard</h2> <ol> <li>Install Harvester with any nodes</li> <li>Navigate to Advanced Settings, then edit <code>ssl-parameters</code></li> <li>Select <strong>Protocols</strong> <code>TLSv1.3</code>, then save</li> <li>execute command <code>echo QUIT | openssl s_client -connect &lt;VIP&gt;:443 -tls1_2 | grep &quot;Cipher is&quot;</code></li> <li>Output should contain <code>error...SSL routines...</code> and <code>Cipher is (NONE)</code></li> <li>execute command <code>echo QUIT | openssl s_client -connect &lt;VIP&gt;:443 -tls1_3 | grep &quot;Cipher is&quot;</code></li> <li>Output should contain <code>Cipher is &lt;one_of_TLS1_3_Ciphers&gt;</code><sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> and should not contain <code>error...SSL...</code></li> <li>repeat Step 2, then select <strong>Protocols</strong> to <code>TLSv1.2</code> only, and input <strong>Ciphers</strong> <code>ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256</code></li> <li>execute command <code>echo QUIT | openssl s_client -connect &lt;VIP&gt;:443 -tls1_2 -cipher 'ECDHE-ECDSA-AES256-GCM-SHA384' | grep &quot;Cipher is&quot;</code></li> <li>Output should contain <code>error...SSL routines...</code> and <code>Cipher is (NONE)</code></li> </ol> <h2 id="case-configure-tls-parameters-in-install-configuration">Case: Configure TLS parameters in install configuration</h2> <ol> <li>Install harvester with PXE installation, set <code>ssl-parameters</code> in <code>system_settings</code> (see the <em>example</em> for more details)</li> <li>Harvester should be installed successfully</li> <li>Dashboard&rsquo;s <strong>ssl-parameters</strong> should be configured as expected</li> </ol> <pre tabindex="0"><code># example for ssl-parameters configure option system_settings: ssl-parameters: | { &#34;protocols&#34;: &#34;TLSv1.3&#34;, &#34;ciphers&#34;: &#34;TLS-AES-128-GCM-SHA256:TLS-AES-128-CCM-8-SHA256&#34; } </code></pr Cluster with Witness Node https://harvester.github.io/tests/manual/hosts/3266-witness-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/3266-witness-node/ - Witness node is a lightweight node only runs etcd which is not schedulable and also not for workloads. The main use case is to form a quorum with the other 2 nodes. Kubernetes need at least 3 etcd nodes to form a quorum, so Harvester also suggests using at least 3 nodes with similar hardware spec. This witness node feature aims for the edge case that user only have 2 powerful + 1 lightweight nodes thus helping benefit both cost and high availability. + <p>Witness node is a lightweight node only runs <strong>etcd</strong> which is not schedulable and also not for workloads. The main use case is to form a quorum with the other 2 nodes.</p> <p>Kubernetes need at least 3 <strong>etcd</strong> nodes to form a quorum, so Harvester also suggests using at least 3 nodes with similar hardware spec. 
This witness node feature targets the edge case where the user has only 2 powerful nodes + 1 lightweight node, helping to reduce cost while preserving high availability.</p> collect Fleet logs and YAMLs in support bundles https://harvester.github.io/tests/manual/_incoming/2297_collect_fleet_logs_and_yamls_in_support_bundles/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2297_collect_fleet_logs_and_yamls_in_support_bundles/ - Ref: https://github.com/harvester/harvester/issues/2297 Verify Steps: Install Harvester with any nodes Login to Dashboard then navigate to support page Click Generate Support Bundle and do Generate log files should be exist in the zipfile of support bundle: logs/cattle-fleet-local-system/fleet-agent-&lt;randomID&gt;/fleet-agent.log logs/cattle-fleet-system/fleet-controller-&lt;randomID&gt;/fleet-controller.log logs/cattle-fleet-system/gitjob-&lt;randomID&gt;/gitjob.log + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2297">https://github.com/harvester/harvester/issues/2297</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to support page</li> <li>Click <strong>Generate Support Bundle</strong> and do Generate</li> <li>The following log files should exist in the zipfile of the support bundle: <ul> <li><code>logs/cattle-fleet-local-system/fleet-agent-&lt;randomID&gt;/fleet-agent.log</code></li> <li><code>logs/cattle-fleet-system/fleet-controller-&lt;randomID&gt;/fleet-controller.log</code></li> <li><code>logs/cattle-fleet-system/gitjob-&lt;randomID&gt;/gitjob.log</code></li> </ul> </li> </ol> Collect system logs https://harvester.github.io/tests/manual/_incoming/2647_collect_system_logs/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2647_collect_system_logs/ - Ref: https://github.com/harvester/harvester/issues/2647 Verify Steps: Install Graylog via docker[^1] Install Harvester with any nodes Login to Dashboard then navigate to Monitoring &amp; Logging/Logging Create Cluster Output with following: Name: gelf-evts Type: Logging/Event Output: GELF Target: &lt;Graylog_IP&gt;, &lt;Graylog_Port&gt;, &lt;UDP&gt; Create Cluster Flow with following: Name: gelf-flow Type of Matches: Logging Cluster Outputs: gelf-evts Create an Image for VM creation Create a vm vm1 and start it Login to Graylog dashboard then navigate to search Select update frequency New logs should be posted continuously. 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/2647">https://github.com/harvester/harvester/issues/2647</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install <em>Graylog</em> via docker[^1]</li> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to <em>Monitoring &amp; Logging/Logging</em></li> <li>Create <strong>Cluster Output</strong> with following: <ul> <li><strong>Name</strong>: gelf-evts</li> <li><strong>Type</strong>: <code>Logging/Event</code></li> <li><strong>Output</strong>: GELF</li> <li><strong>Target</strong>: <code>&lt;Graylog_IP&gt;, &lt;Graylog_Port&gt;, &lt;UDP&gt;</code></li> </ul> </li> <li>Create <strong>Cluster Flow</strong> with following: <ul> <li><strong>Name</strong>: gelf-flow</li> <li><strong>Type</strong> of Matches: <code>Logging</code></li> <li><strong>Cluster Outputs</strong>: <code>gelf-evts</code></li> </ul> </li> <li>Create an Image for VM creation</li> <li>Create a vm <code>vm1</code> and start it</li> <li>Login to <code>Graylog</code> dashboard then navigate to search</li> <li>Select update frequency <img src="https://user-images.githubusercontent.com/5169694/191725169-d1203674-13d8-487b-9fa2-e1d9394fa5c0.png" alt="image"></li> <li>New logs should be posted continuously.</li> </ol> <h3 id="code-snippets-to-setup-graylog">code snippets to setup Graylog</h3> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>docker run --name mongo -d mongo:4.2.22-rc0 </span></span><span style="display:flex;"><span>sysctl -w vm.max_map_count<span style="color:#f92672">=</span><span style="color:#ae81ff">262145</span> </span></span><span style="display:flex;"><span>docker run --name elasticsearch -p 9200:9200 -p 9300:9300 -e xpack.security.enabled<span style="color:#f92672">=</span>false -e node.name<span style="color:#f92672">=</span>es01 -it docker.elastic.co/elasticsearch/elasticsearch:6.8.23 </span></span><span style="display:flex;"><span>docker run --name graylog --link mongo --link elasticsearch -p 9000:9000 -p 12201:12201 -p 1514:1514 -p 5555:5555 -p 12202:12202 -p 12202:12202/udp -e GRAYLOG_PASSWORD_SECRET<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Graypass3WordMor!e&#34;</span> -e GRAYLOG_ROOT_PASSWORD_SHA2<span style="color:#f92672">=</span>899e9793de44cbb14f48b4fce810de122093d03705c0971752a5c15b0fa1ae03 -e GRAYLOG_HTTP_EXTERNAL_URI<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;http://127.0.0.1:9000/&#34;</span> -d graylog/graylog:4.3.5 </span></span></code></pr Config logging in Harvester Dashboard https://harvester.github.io/tests/manual/_incoming/2646_config_logging_in_harvester_dashboard/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2646_config_logging_in_harvester_dashboard/ - Ref: https://github.com/harvester/harvester/issues/2646 Verify Steps: Install Harvester with any nodes Login to Dashboard then navigate to Monitoring &amp; Logging/Logging Configurations of Fluentbit and Fluentd should be available in Logging/Configuration + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2646">https://github.com/harvester/harvester/issues/2646</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/191697822-6bc0d7b8-2c56-42e0-805a-408c1ef19845.png" alt="image"> <img 
src="https://user-images.githubusercontent.com/5169694/191697860-7ef66c19-cd3e-4e4c-b485-315e7eec771d.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to <em>Monitoring &amp; Logging/Logging</em></li> <li>Configurations of Fluentbit and Fluentd should be available in <em>Logging/Configuration</em></li> </ol> Configure VLAN interface on ISO installer UI https://harvester.github.io/tests/manual/_incoming/1647-configure-vlan-interface-on-iso-installer/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1647-configure-vlan-interface-on-iso-installer/ - Related issues: #1647 [FEATURE] Support configuring a VLAN at the management interface in the ISO installer UI Category: Network Harvester Installer Environment Setup Prepare a No VLAN network environment Prepare a VLAN network environment Verification Steps Boot Harvester ISO installer Set VLAN id or keep empty Keep installing Check can complete installation Check harvester has network connectivity Test Plan Matrix Create mode No VLAN DHCP VIP + DHCP node ip DHCP VIP + Static node ip static VIP + DHCP node ip static VIP + Static node ip VLAN DHCP VIP + DHCP node ip DHCP VIP + Static node ip static VIP + DHCP node ip static VIP + Static node ip Join mode No VLAN DHCP VIP + DHCP node ip DHCP VIP + Static node ip static VIP + DHCP node ip static VIP + Static node ip VLAN DHCP VIP + DHCP node ip DHCP VIP + Static node ip static VIP + DHCP node ip static VIP + Static node ip Expected Results Check can complete installation Check harvester has network connectivity ip a show dev mgmt-br [VLAN ID] has IP e. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1647">#1647</a> [FEATURE] Support configuring a VLAN at the management interface in the ISO installer UI</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> <li>Harvester Installer</li> </ul> <h2 id="environment-setup">Environment Setup</h2> <ol> <li>Prepare a <code>No</code> VLAN network environment</li> <li>Prepare a <code>VLAN</code> network environment</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Boot Harvester ISO installer</li> <li>Set VLAN id or keep empty</li> <li>Keep installing</li> <li>Check can complete installation</li> <li>Check harvester has network connectivity</li> </ol> <h2 id="test-plan-matrix">Test Plan Matrix</h2> <h3 id="create-mode">Create mode</h3> <h4 id="no-vlan">No VLAN</h4> <ol> <li>DHCP VIP + DHCP node ip</li> <li>DHCP VIP + Static node ip</li> <li>static VIP + DHCP node ip</li> <li>static VIP + Static node ip</li> </ol> <h4 id="vlan">VLAN</h4> <ol> <li>DHCP VIP + DHCP node ip</li> <li>DHCP VIP + Static node ip</li> <li>static VIP + DHCP node ip</li> <li>static VIP + Static node ip</li> </ol> <h3 id="join-mode">Join mode</h3> <h4 id="no-vlan-1">No VLAN</h4> <ol> <li>DHCP VIP + DHCP node ip</li> <li>DHCP VIP + Static node ip</li> <li>static VIP + DHCP node ip</li> <li>static VIP + Static node ip</li> </ol> <h4 id="vlan-1">VLAN</h4> <ol> <li>DHCP VIP + DHCP node ip</li> <li>DHCP VIP + Static node ip</li> <li>static VIP + DHCP node ip</li> <li>static VIP + Static node ip</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Check can complete installation</li> <li>Check harvester has network connectivity</li> <li><code>ip a show dev mgmt-br [VLAN ID]</code> has IP</li> <li>e.g ip a show dev mgmt-br.100</li> </ol> CPU overcommit on VM (e2e_fe) 
https://harvester.github.io/tests/manual/virtual-machines/cpu_overcommit/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/cpu_overcommit/ - Ref: https://github.com/harvester/harvester/issues/1429 Verify Items Overcommit can be edit on Dashboard VM can allocate exceed CPU on the host Node VM can chage allocated CPU after created Case: Update Overcommit configuration Install Harvester with any Node Login to Dashboard, then navigate to Advanced Settings Edit overcommit-config The field of CPU should be editable Created VM can allocate maximum CPU should be &lt;HostCPUs&gt; * [&lt;overcommit-CPU&gt;/100] - &lt;Host Reserved&gt; Case: VM can allocate CPUs more than Host have Install Harvester with any Node Create a cloud image for VM Creation Create a VM with &lt;HostCPUs&gt; * 5 CPUs VM should start successfully lscpu in VM should display allocated CPUs Page of Virtual Machines should display allocated CPUs correctly Case: Update VM allocated CPUs Install Harvester with any Node Create a cloud image for VM Creation Create a VM with &lt;HostCPUs&gt; * 5 CPUs VM should start successfully Increase/Reduce VM allocated CPUs to minimum/maximum VM should start successfully lscpu in VM should display allocated CPUs Page of Virtual Machines should display allocated CPUs correctly + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1429">https://github.com/harvester/harvester/issues/1429</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Overcommit can be edit on Dashboard</li> <li>VM can allocate exceed CPU on the host Node</li> <li>VM can chage allocated CPU after created</li> </ul> <h2 id="case-update-overcommit-configuration">Case: Update Overcommit configuration</h2> <ol> <li>Install Harvester with any Node</li> <li>Login to Dashboard, then navigate to <strong>Advanced Settings</strong></li> <li>Edit <code>overcommit-config</code></li> <li>The field of <strong>CPU</strong> should be editable</li> <li>Created VM can allocate maximum CPU should be <code>&lt;HostCPUs&gt; * [&lt;overcommit-CPU&gt;/100] - &lt;Host Reserved&gt;</code></li> </ol> <h2 id="case-vm-can-allocate-cpus-more-than-host-have">Case: VM can allocate CPUs more than Host have</h2> <ol> <li>Install Harvester with any Node</li> <li>Create a cloud image for VM Creation</li> <li>Create a VM with <code>&lt;HostCPUs&gt; * 5</code> CPUs</li> <li>VM should start successfully</li> <li><code>lscpu</code> in VM should display allocated CPUs</li> <li>Page of Virtual Machines should display allocated CPUs correctly</li> </ol> <h2 id="case-update-vm-allocated-cpus">Case: Update VM allocated CPUs</h2> <ol> <li>Install Harvester with any Node</li> <li>Create a cloud image for VM Creation</li> <li>Create a VM with <code>&lt;HostCPUs&gt; * 5</code> CPUs</li> <li>VM should start successfully</li> <li>Increase/Reduce VM allocated CPUs to minimum/maximum</li> <li>VM should start successfully</li> <li><code>lscpu</code> in VM should display allocated CPUs</li> <li>Page of Virtual Machines should display allocated CPUs correctly</li> </ol> Create a 3 nodes harvester cluster with RKE1 (only with mandatory info, other values stays with default) https://harvester.github.io/tests/manual/node-driver/create-3-node-rke1/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/create-3-node-rke1/ - From the Rancher home page, click on Create Select RKE1 on the right and click on Harvester Enter a cluster name Give a prefix name for the VMs Increase count to 3 nodes Check etcd, 
Control Plane and Worker boxes Select or create a node template if needed Click on Add node template Create credentials by selecting your harvester cluster Fill the instance option fields, pay attention to correctly write the default ssh user of the chosen image in the SSH user field Give a name to the rancher template and click on Create Click on create to spin the cluster up Expected Results The status of the created cluster shows active The status of the corresponding vm on harvester active The 3 nodes should be with the active status + <ol> <li>From the Rancher home page, click on Create</li> <li>Select RKE1 on the right and click on Harvester</li> <li>Enter a cluster name</li> <li>Give a prefix name for the VMs</li> <li>Increase count to 3 nodes</li> <li>Check etcd, Control Plane and Worker boxes</li> <li>Select or create a node template if needed <ul> <li>Click on Add node template</li> <li>Create credentials by selecting your harvester cluster</li> <li>Fill the instance option fields, pay attention to correctly write the default ssh user of the chosen image in the SSH user field</li> <li>Give a name to the rancher template and click on Create</li> </ul> </li> <li>Click on create to spin the cluster up</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>The status of the corresponding vm on harvester active</li> <li>The 3 nodes should be with the active status</li> </ol> Create a 3 nodes harvester cluster with RKE2 (only with mandatory info, other values stays with default) https://harvester.github.io/tests/manual/node-driver/create-3-node-rke2/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/create-3-node-rke2/ - From the Rancher home page, click on Create Select RKE2 on the right and click on Harvester Create the credential to talk with the harvester provider Select your harvester cluster (external or internal) Enter a cluster name Increase machine count to 3 Fill the mandatory fields Namespace Image Network SSH User (default ssh user of the chosen image) Click on create to spin the cluster up Expected Results The status of the created cluster shows active The status of the corresponding vm on harvester active The 3 nodes should be with the active status + <ol> <li>From the Rancher home page, click on Create</li> <li>Select RKE2 on the right and click on Harvester</li> <li>Create the credential to talk with the harvester provider <ul> <li>Select your harvester cluster (external or internal)</li> </ul> </li> <li>Enter a cluster name</li> <li>Increase machine count to 3</li> <li>Fill the mandatory fields <ul> <li>Namespace</li> <li>Image</li> <li>Network</li> <li>SSH User (default ssh user of the chosen image)</li> </ul> </li> <li>Click on create to spin the cluster up</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>The status of the corresponding vm on harvester active</li> <li>The 3 nodes should be with the active status</li> </ol> Create a harvester cluster and add Taint to a node https://harvester.github.io/tests/manual/node-driver/q-cluster-add-taint/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/q-cluster-add-taint/ - Expected Results + <h2 id="expected-results">Expected Results</h2> Create a harvester cluster with 3 master nodes https://harvester.github.io/tests/manual/node-driver/add-3-master-nodes/ Mon, 01 Jan 0001 00:00:00 +0000 
https://harvester.github.io/tests/manual/node-driver/add-3-master-nodes/ - add a harvester node template Create harvester cluster count set to 3 Expected Results The status of the created cluster shows active show the 3 created node status running in harvester&rsquo;s vm list the information displayed on rancher and harvester matches the template configuration + <ol> <li>add a harvester node template</li> <li>Create harvester cluster count set to 3</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>show the 3 created node status running in harvester&rsquo;s vm list</li> <li>the information displayed on rancher and harvester matches the template configuration</li> </ol> Create a harvester cluster with a non-default version of k8s https://harvester.github.io/tests/manual/node-driver/cluster-non-default-k8s/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-non-default-k8s/ - Verify versions 1.19.10, 1.18.18, 1.17.17, 1.16.15 respectively Expected Results k8s displayed on the UI is consistent with the created version (cluster list, host list) Use kubectl version to see that the version information is the same as the created version + <ol> <li>Verify versions <code>1.19.10</code>, <code>1.18.18</code>, <code>1.17.17</code>, <code>1.16.15</code> respectively</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>k8s displayed on the UI is consistent with the created version (cluster list, host list)</li> <li>Use <code>kubectl version</code> to see that the version information is the same as the created version</li> </ol> Create a harvester cluster with different images https://harvester.github.io/tests/manual/node-driver/cluster-different-images/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-different-images/ - d a harvester node template Set the image, it should be a drop-down list, refer to &ldquo;Test Data&rdquo; for other values ubuntu-18.04-server-cloudimg-amd64.img focal-server-cloudimg-amd64-disk-kvm.img Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active The status of the corresponding vm on harvester active The information displayed on rancher and harvester matches the template configuration The drop-down list of images in the harvester node template corresponds to the list of images in the harvester Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. 
+ <ol> <li>d a harvester node template</li> <li>Set the image, it should be a drop-down list, refer to &ldquo;Test Data&rdquo; for other values <ul> <li>ubuntu-18.04-server-cloudimg-amd64.img</li> <li>focal-server-cloudimg-amd64-disk-kvm.img</li> </ul> </li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>The status of the corresponding vm on harvester active</li> <li>The information displayed on rancher and harvester matches the template configuration</li> <li>The drop-down list of images in the harvester node template corresponds to the list of images in the harvester</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Create a harvester cluster, template drop-down list validation https://harvester.github.io/tests/manual/node-driver/cluster-template-dropdown-multi-user/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-template-dropdown-multi-user/ - Create multiple harvester Node Templates with different users Add harvester cluster and set Template Expected Results pop up a template list pop-up box Show the templates you created and the templates created by other users + <ol> <li>Create multiple harvester Node Templates with different users</li> <li>Add harvester cluster and set Template</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>pop up a template list pop-up box</li> <li>Show the templates you created and the templates created by other users</li> </ol> Create a harvester-specific StorageClass for Longhorn https://harvester.github.io/tests/manual/_incoming/2692_create_a_harvester-specific_storageclass_for_longhorn/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2692_create_a_harvester-specific_storageclass_for_longhorn/ - Ref: https://github.com/harvester/harvester/issues/2692 Verify Steps: Install Harvester with 2+ nodes Login to Dashboard and create an image for VM creation Navigate to Advanced/Storage Classes, harvester-longhorn and longhorn should be available, and harvester-longhorn should be settled as Default Navigate to Volumes and create vol-old where Storage Class is longhorn and vol-new where Storage Class is harvester-longhorn Create VM vm1 attaching vol-old and vol-new Login to vm1 and use fdisk format volumes and mount to folders: old and new Create file and move into both volumes as following commands: dd if=/dev/zero of=file1 bs=10485760 count=10 cp file1 old &amp;&amp; cp file1 new Migrate vm1 to another host, migration should success Login to vm1, volumes should still attaching to folders old and new Execute command sha256sum on old/file1 and new/file1 should show the same value. 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/2692">https://github.com/harvester/harvester/issues/2692</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/192323716-c863af2a-388f-49d6-8636-d57f8abbad35.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with 2+ nodes</li> <li>Login to Dashboard and create an image for VM creation</li> <li>Navigate to <em>Advanced/Storage Classes</em>, <code>harvester-longhorn</code> and <code>longhorn</code> should be available, and <code>harvester-longhorn</code> should be settled as <strong>Default</strong></li> <li>Navigate to <em>Volumes</em> and create <code>vol-old</code> where Storage Class is <code>longhorn</code> and <code>vol-new</code> where Storage Class is <code>harvester-longhorn</code></li> <li>Create VM <code>vm1</code> attaching <code>vol-old</code> and <code>vol-new</code></li> <li>Login to <code>vm1</code> and use <code>fdisk</code> format volumes and mount to folders: <code>old</code> and <code>new</code></li> <li>Create file and move into both volumes as following commands:</li> </ol> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>dd <span style="color:#66d9ef">if</span><span style="color:#f92672">=</span>/dev/zero of<span style="color:#f92672">=</span>file1 bs<span style="color:#f92672">=</span><span style="color:#ae81ff">10485760</span> count<span style="color:#f92672">=</span><span style="color:#ae81ff">10</span> </span></span><span style="display:flex;"><span>cp file1 old <span style="color:#f92672">&amp;&amp;</span> cp file1 new </span></span></code></pr Create a new VM and add Enable USB tablet option (e2e_be_fe) https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-and-add-enable-usb-tablet-option/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-and-add-enable-usb-tablet-option/ - Add Enable usb tablet Option Save/Create VM Expected Results Machine starts successfully Enable usb tablet shows In YAML In Form + <ol> <li>Add Enable usb tablet Option</li> <li>Save/Create VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts successfully</li> <li>Enable usb tablet shows <ul> <li>In YAML</li> <li>In Form</li> </ul> </li> </ol> Create a new VM and add Install guest agent option (e2e_be_fe) https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-and-add-install-guest-agent-option/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-and-add-install-guest-agent-option/ - Add install Guest Agent Option Save/Create VM Validate that qemu-guest-agent was installed You can do this on ubuntu with the command dpkg -l | grep qemu Expected Results Machine starts successfully Guest Agent Option shows In YAML In Form Guest Agent is installed + <ol> <li>Add install Guest Agent Option</li> <li>Save/Create VM</li> <li>Validate that qemu-guest-agent was installed <ul> <li>You can do this on ubuntu with the command <code>dpkg -l | grep qemu</code></li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts successfully</li> <li>Guest Agent Option shows <ul> <li>In YAML</li> <li>In Form</li> </ul> </li> <li>Guest Agent is installed</li> </ol> Create a new VM with Network Data from the form (e2e_fe) 
https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-with-network-data-from-the-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-with-network-data-from-the-form/ - Add Network Data to the VM Here is an example of Network Data config to add DHCP to the physical interface eth0 network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp Save/Create the VM Expected Results Machine starts succesfully Network Data should show in YAML Network Datashould show in Form Machine should have DHCP for network on eth0 + <ol> <li> <p>Add Network Data to the VM</p> <ul> <li> <p>Here is an example of Network Data config to add DHCP to the physical interface eth0</p> <pre tabindex="0"><code>network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp </code></pre></li> </ul> </li> <li> <p>Save/Create the VM</p> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts succesfully</li> <li>Network Data should show in YAML</li> <li>Network Datashould show in Form</li> <li>Machine should have DHCP for network on eth0</li> </ol> Create a new VM with Network Data from YAML (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-with-network-data-from-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-with-network-data-from-yaml/ - Add Network Data to the VM via YAML Here is an example of Network Data config to add DHCP to the physical interface eth0 network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp Save/Create the VM Expected Results Machine starts succesfully Network Data should show in YAML Network Datashould show in Form Machine should have DHCP for network on eth0 + <ol> <li>Add Network Data to the VM via YAML <ul> <li>Here is an example of Network Data config to add DHCP to the physical interface eth0 <pre tabindex="0"><code>network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp </code></pre></li> </ul> </li> <li>Save/Create the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts succesfully</li> <li>Network Data should show in YAML</li> <li>Network Datashould show in Form</li> <li>Machine should have DHCP for network on eth0</li> </ol> Create a new VM with User Data from the form https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-with-user-data-from-the-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-with-user-data-from-the-form/ - Add User data to the VM Here is an example of user data config to add a password #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Save/Create the VM Expected Results Machine starts succesfully User data should exist In YAML In Form Machine should have user password set + <ol> <li>Add User data to the VM</li> </ol> <ul> <li>Here is an example of user data config to add a password <code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True Save/Create the VM</code></li> </ul> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts succesfully</li> <li>User data should exist <ul> <li>In YAML</li> <li>In Form</li> </ul> </li> <li>Machine should have user password set</li> </ol> Create a VM on a VLAN with an existing machine and then change the existing machine's VLAN 
https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-on-a-vlan-with-an-existing-machine-and-then-change-the-existing-machines-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-on-a-vlan-with-an-existing-machine-and-then-change-the-existing-machines-vlan/ - Create/edit VM/VMs with the appropriate VLAN Change VLAN for VM if appropriate Expected Results VM should create successfully Appropriate VLAN should show In config in YAML VMs should NOT be able to connect on network verify with ping/ICMP verify with SSH verify with telnet over port 80 if there&rsquo;s a web server + <ol> <li>Create/edit VM/VMs with the appropriate VLAN</li> <li>Change VLAN for VM if appropriate</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create successfully</li> <li>Appropriate VLAN should show <ul> <li>In config</li> <li>in YAML</li> </ul> </li> <li>VMs should NOT be able to connect on network <ul> <li>verify with ping/ICMP</li> <li>verify with SSH</li> <li>verify with telnet over port 80 if there&rsquo;s a web server</li> </ul> </li> </ol> Create a VM through the Rancher dashboard https://harvester.github.io/tests/manual/harvester-rancher/1613-create-vm-through-rancher-dashboard/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/1613-create-vm-through-rancher-dashboard/ - Related issues: #1613 VM memory shows NaN Gi Verification Steps import harvester into rancher&rsquo;s virtualization management Load Harvester dashboard by going to virtualization management then clicking on harvester cluster Create a new VM on Harvester Validate the following in the VM list page, the form, and YAML&gt; Memory CPU Disk space Expected Results VM should create VM should start All specifications should show correctly + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1613">#1613</a> VM memory shows NaN Gi</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>import harvester into rancher&rsquo;s virtualization management</li> <li>Load Harvester dashboard by going to virtualization management then clicking on harvester cluster</li> <li>Create a new VM on Harvester</li> <li>Validate the following in the VM list page, the form, and YAML&gt; <ol> <li>Memory</li> <li>CPU</li> <li>Disk space</li> </ol> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create</li> <li>VM should start</li> <li>All specifications should show correctly</li> </ol> Create a VM with 2 networks (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-2-networks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-2-networks/ - Add a network to the VM Save the VM Wait for it to start/restart Expected Results the VM should start successfully The already existing network connectivity should still work The new connectivity should also work Comments one default management network and one VLAN + <ol> <li>Add a network to the VM</li> <li>Save the VM</li> <li>Wait for it to start/restart</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>the VM should start successfully</li> <li>The already existing network connectivity should still work</li> <li>The new connectivity should also work</li> </ol> <h3 id="comments">Comments</h3> <p>one default management network and one VLAN</p> Create a vm with all the default values (e2e_be_fe) 
https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-all-the-default-values/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-all-the-default-values/ - Create a VM with all default values Save Expected Results VM should save VM should start if start after creation checkbox is checked Config should show In Form In YAML + <ol> <li>Create a VM with all default values</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should start if start after creation checkbox is checked</li> <li>Config should show <ul> <li>In Form</li> <li>In YAML</li> </ul> </li> </ol> Create a VM with Start VM on Creation checked (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-start-vm-on-creation-checked/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-start-vm-on-creation-checked/ - Create VM Expected Results VM should start Checkbox for start virtual machine on creation should show as appropriate while editing machine after creation + <ol> <li>Create VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should start</li> <li>Checkbox for start virtual machine on creation should show as appropriate while editing machine after creation</li> </ol> Create a VM with start VM on creation unchecked (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-start-vm-on-creation-unchecked/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-start-vm-on-creation-unchecked/ - Create VM Expected Results VM should start or not start as appropriate Checkbox for start virtual machine on creation should show as appropriate while editing machine after creation + <ol> <li>Create VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should start or not start as appropriate</li> <li>Checkbox for start virtual machine on creation should show as appropriate while editing machine after creation</li> </ol> Create Backup Target (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/create-backup-target/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/create-backup-target/ - Open up Backup-target in settings Input server info Save Expected Results Backup Target should show in settings + <ol> <li>Open up Backup-target in settings</li> <li>Input server info</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backup Target should show in settings</li> </ol> Create harvester cluster using non-default CPUs, Memory, Disk https://harvester.github.io/tests/manual/node-driver/cluster-non-default-resources/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-non-default-resources/ - add a harvester node template The set CPUs, Memory, and Disk values, refer to &ldquo;Test Data&rdquo; for other values Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:4 Memorys:8 Disk:50 Bus:Virtlo Image: openSUSE-Leap-15. 
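<p>After saving the backup target in the steps above, the stored value can be cross-checked from the CLI; a minimal sketch, assuming kubectl access to the Harvester cluster and that the setting object is named backup-target:</p> <pre tabindex="0"><code># prints the JSON value describing the S3 or NFS endpoint configured in the UI
kubectl get settings.harvesterhci.io backup-target -o jsonpath='{.value}'
</code></pre>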
+ <ol> <li>add a harvester node template</li> <li>The set CPUs, Memory, and Disk values, refer to &ldquo;Test Data&rdquo; for other values</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:4 Memorys:8 Disk:50 Bus:Virtlo Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Create harvester clusters with different Bus https://harvester.github.io/tests/manual/node-driver/cluster-different-bus/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-different-bus/ - add a harvester node template Set the “Network Name”, it should be a drop-down list, refer to &ldquo;Test Data&rdquo; for other values VirtIO SATA SCSI Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration The drop-down list of &ldquo;BUS&rdquo; in the harvester node template corresponds to the list of “BUS” in the harvester Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. 
+ <ol> <li>add a harvester node template</li> <li>Set the “Network Name”, it should be a drop-down list, refer to &ldquo;Test Data&rdquo; for other values <ul> <li>VirtIO</li> <li>SATA</li> <li>SCSI</li> </ul> </li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> <li>The drop-down list of &ldquo;BUS&rdquo; in the harvester node template corresponds to the list of “BUS” in the harvester</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Create harvester clusters with different Networks https://harvester.github.io/tests/manual/node-driver/cluster-different-networks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-different-networks/ - add a harvester node template Set the “Network Name”, it should be a drop-down list, refer to &ldquo;Test Data&rdquo; for other values vlan1 vlan2 Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration The drop-down list of &ldquo;Network Name&rdquo; in the harvester node template corresponds to the list of “Network Name” in the harvester Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. 
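<p>To verify that the Bus chosen in the node template above was actually applied, the KubeVirt spec of the provisioned VM can be inspected; a sketch, where the VM name and namespace are placeholders:</p> <pre tabindex="0"><code>VM_NAME=rke2-pool1-xxxxx     # placeholder: name of the VM created for the guest cluster
kubectl get vm "$VM_NAME" -n default \
  -o jsonpath='{.spec.template.spec.domain.devices.disks[*].disk.bus}'
# expected output: virtio, sata or scsi, matching the template
</code></pre>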
+ <ol> <li>add a harvester node template</li> <li>Set the “Network Name”, it should be a drop-down list, refer to &ldquo;Test Data&rdquo; for other values <ul> <li>vlan1</li> <li>vlan2</li> </ul> </li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> <li>The drop-down list of &ldquo;Network Name&rdquo; in the harvester node template corresponds to the list of “Network Name” in the harvester</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1/vlan2 SSH User: opensuse </code></pr Create image from Volume(e2e_fe) https://harvester.github.io/tests/manual/volumes/create-image-from-volume/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/create-image-from-volume/ - Create new VM Add SSH key Run through iterations for 1, 2, and 3 for attached bash script Export volume to image from volumes page Create new VM from image Run md5sum -c file2.md5 file1-2.md5 file2-2.md5 file3.md5 Expected Results image should upload/complete in images page New VM should create SSH key should work on new VM file2.md5 should fail and the other three md5 checks should pass Comments #!/bin/bash # first file if [ $1 = 1 ] then dd if=/dev/urandom of=file1. + <ol> <li>Create new VM</li> <li>Add SSH key</li> <li>Run through iterations for 1, 2, and 3 for attached bash script</li> <li>Export volume to image from volumes page</li> <li>Create new VM from image</li> <li>Run <code>md5sum -c file2.md5 file1-2.md5 file2-2.md5 file3.md5</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>image should upload/complete in images page</li> <li>New VM should create</li> <li>SSH key should work on new VM</li> <li>file2.md5 should fail and the other three md5 checks should pass</li> </ol> <h3 id="comments">Comments</h3> <pre tabindex="0"><code>#!/bin/bash # first file if [ $1 = 1 ] then dd if=/dev/urandom of=file1.txt count=100 bs=1M md5sum file1.txt &gt; file1.md5 md5sum -c file1.md5 fi ## overwrite file1 and create file2 if [ $1 = 2 ] then dd if=/dev/urandom of=file1.txt count=100 bs=1M dd if=/dev/urandom of=file2.txt count=100 bs=1M md5sum file1.txt &gt; file1-2.md5 md5sum file2.txt &gt; file2.md5 md5sum -c file1.md5 file1-2.md5 file2.md5 fi ## overwrite file2 and create file3 if [ $1 = 3 ] then dd if=/dev/urandom of=file2.txt count=100 bs=1M dd if=/dev/urandom of=file3.txt count=100 bs=1M md5sum file2.txt &gt; file2-2.md5 md5sum file3.txt &gt; file3.md5 md5sum -c file2.md5 file1-2.md5 file2-2.md5 file3.md5 fi </code></pr Create Images with valid image URL (e2e_be_fe) https://harvester.github.io/tests/manual/images/create-images-with-valid-image-url/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/create-images-with-valid-image-url/ - Create image with cloud image available for openSUSE. 
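<p>The Network Name drop-down referenced above is populated from the VM networks defined in Harvester; a sketch for listing them from the CLI (Harvester VM networks are Multus NetworkAttachmentDefinitions):</p> <pre tabindex="0"><code>kubectl get network-attachment-definitions.k8s.cni.cncf.io -A
# expect entries such as vlan1 and vlan2 in the namespace they were created in
</code></pre>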
http://download.opensuse.org/repositories/Cloud:/Images:/Leap_15.3/images/openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Expected Results Image should show state as Active. Check the backing image in Longhorn Known Bugs https://github.com/harvester/harvester/issues/1269 + <ol> <li>Create image with cloud image available for openSUSE. <a href="http://download.opensuse.org/repositories/Cloud:/Images:/Leap_15.3/images/openSUSE-Leap-15.3.x86_64-NoCloud.qcow2">http://download.opensuse.org/repositories/Cloud:/Images:/Leap_15.3/images/openSUSE-Leap-15.3.x86_64-NoCloud.qcow2</a></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Image should show state as Active.</li> <li>Check the backing image in Longhorn</li> </ol> <h3 id="known-bugs">Known Bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/1269">https://github.com/harvester/harvester/issues/1269</a></p> Create multiple instances of the vm with ISO image (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-multiple-instances-vm-with-iso-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-multiple-instances-vm-with-iso-image/ - Create images using the external path for ISO image. In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the 3 vms and wait for vm to start Expected Results 3 vm should come up and start with same config. Observe the time taken for the system to start the vms. Observe the pattern of the vms get allocated on the nodes. + <ol> <li>Create images using the external path for ISO image.</li> <li>In user data mention the below to access the vm.</li> <li></li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the 3 vms and wait for vm to start</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>3 vm should come up and start with same config.</li> <li>Observe the time taken for the system to start the vms.</li> <li>Observe the pattern of the vms get allocated on the nodes. Like how many vm on each nodes are created. Is there a pattern?</li> </ol> Create multiple instances of the vm with raw image (e2e_be_fe) https://harvester.github.io/tests/manual/virtual-machines/create-multiple-instances-vm-with-raw-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-multiple-instances-vm-with-raw-image/ - Create images using the external path for cloud image. In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the 3 vms and wait for vm to start. Expected Results 3 vm should come up and start with same config. Observe the time taken for the system to start the vms. Observe the pattern of the vms get allocated on the nodes. + <ol> <li>Create images using the external path for cloud image.</li> <li>In user data mention the below to access the vm.</li> <li></li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the 3 vms and wait for vm to start.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>3 vm should come up and start with same config.</li> <li>Observe the time taken for the system to start the vms.</li> <li>Observe the pattern of the vms get allocated on the nodes. Like how many vm on each nodes are created. 
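<p>Image download progress and the resulting Longhorn backing image can also be watched from the CLI; a sketch, assuming kubectl access to the Harvester cluster:</p> <pre tabindex="0"><code>kubectl get virtualmachineimages.harvesterhci.io -A       # the image should end up Active
kubectl -n longhorn-system get backingimages.longhorn.io  # backing image created for it
</code></pre>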
Is there a pattern?</li> </ol> Create multiple instances of the vm with Windows Image (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-multiple-instances-vm-with-windows-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-multiple-instances-vm-with-windows-image/ - Create images using the external path for ISO image. In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the 3 vms and wait for vm to start. Expected Results 3 vm should come up and start with same config. Observe the time taken for the system to start the vms. Observe the pattern of the vms get allocated on the nodes. + <ol> <li>Create images using the external path for ISO image.</li> <li>In user data mention the below to access the vm.</li> <li></li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the 3 vms and wait for vm to start.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>3 vm should come up and start with same config.</li> <li>Observe the time taken for the system to start the vms.</li> <li>Observe the pattern of the vms get allocated on the nodes. Like how many vm on each nodes are created. Is there a pattern?</li> </ol> Create multiple VM instances using VM template with EFI mode selected https://harvester.github.io/tests/manual/_incoming/2577-create-multiple-vm-using-template-efi-mode/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2577-create-multiple-vm-using-template-efi-mode/ - Related issues: #2577 [BUG] Boot in EFI mode not selected when creating multiple VM instances using VM template with EFI mode selected. 
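<p>For the step above that asks you to observe how the VMs are allocated across nodes, the placement of each instance can be read directly from the VirtualMachineInstance objects; a minimal sketch:</p> <pre tabindex="0"><code>kubectl get vmi -A -o wide     # the NODENAME column shows which node each instance landed on
</code></pre>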
Category: Virtual Machine Verification Steps Create a VM template, check the Booting in EFI mode Create multiple VM instance and use the VM template have Booting in EFI mode checked Wait for all VM running Check the EFI mode is enabled in VM config ssh to each VM Check the /etc/firmware/efi file Expected Results Can create multiple VM instance using VM template with EFI mode selected + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2577">#2577</a> [BUG] Boot in EFI mode not selected when creating multiple VM instances using VM template with EFI mode selected.</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Virtual Machine</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a VM template, check the <code>Booting in EFI mode</code></li> <li>Create multiple VM instance and use the VM template have <code>Booting in EFI mode</code> checked</li> <li>Wait for all VM running</li> <li>Check the EFI mode is enabled in VM config</li> <li>ssh to each VM</li> <li>Check the /etc/firmware/efi file</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Can create multiple VM instance using VM template with EFI mode selected</p> Create new network (e2e_be_fe) https://harvester.github.io/tests/manual/network/create-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/create-network/ - Navigate to the networks page in harvester Click Create Add a name Add a VLAN ID Click Create Expected Results You should be able to add the VLAN You should see the VLAN show up in the networks page + <ol> <li>Navigate to the networks page in harvester</li> <li>Click Create</li> <li>Add a name</li> <li>Add a VLAN ID</li> <li>Click Create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should be able to add the VLAN</li> <li>You should see the VLAN show up in the networks page</li> </ol> Create new VM with a machine type of PC (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-new-vm-with-a-machine-type-pc/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-new-vm-with-a-machine-type-pc/ - Set up the VM with the appropriate machine type Save/create Expected Results Machine should start sucessfully Machine should show the new machine type in the config and in the YAML + <ol> <li>Set up the VM with the appropriate machine type</li> <li>Save/create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine should start sucessfully</li> <li>Machine should show the new machine type in the config and in the YAML</li> </ol> Create new VM with a machine type of q35 (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-new-vm-with-a-machine-type-q35/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-new-vm-with-a-machine-type-q35/ - Set up the VM with the appropriate machine type Save/create Expected Results Machine should start sucessfully Machine should show the new machine type in the config and in the YAML + <ol> <li>Set up the VM with the appropriate machine type</li> <li>Save/create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine should start sucessfully</li> <li>Machine should show the new machine type in the config and in the YAML</li> </ol> Create one VM on a VLAN and then move another VM to that VLAN https://harvester.github.io/tests/manual/virtual-machines/create-one-vm-on-a-vlan-and-then-move-another-vm-to-that-vlan/ 
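<p>For the EFI-mode case above, a minimal in-guest check for UEFI boot (note the firmware indicator directory is /sys/firmware/efi):</p> <pre tabindex="0"><code>if [ -d /sys/firmware/efi ]; then echo "booted via UEFI"; else echo "booted via legacy BIOS"; fi
</code></pre>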
Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-one-vm-on-a-vlan-and-then-move-another-vm-to-that-vlan/ - Create/edit VM/VMs with the appropriate VLAN Expected Results VM should create successfully Appropriate VLAN should show In config in YAML VMs should be able to connect on network This can be verified with a ping over the IP, or via other options if ICMP is disabled + <ol> <li>Create/edit VM/VMs with the appropriate VLAN</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create successfully</li> <li>Appropriate VLAN should show <ul> <li>In config</li> <li>in YAML</li> </ul> </li> <li>VMs should be able to connect on network <ul> <li>This can be verified with a ping over the IP, or via other options if ICMP is disabled</li> </ul> </li> </ol> Create one VM on a VLAN that has other VMs then change it to a different VLAN https://harvester.github.io/tests/manual/virtual-machines/create-one-vm-on-a-vlan-that-has-other-vms-then-change-it-to-a-different-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-one-vm-on-a-vlan-that-has-other-vms-then-change-it-to-a-different-vlan/ - Create/edit VM/VMs with the appropriate VLAN Change VLAN for VM if appropriate Expected Results VM should create successfully Appropriate VLAN should show In config in YAML VMs should NOT be able to connect on network verify with ping/ICMP verify with SSH verify with telnet over port 80 if there&rsquo;s a web server + <ol> <li>Create/edit VM/VMs with the appropriate VLAN</li> <li>Change VLAN for VM if appropriate</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create successfully</li> <li>Appropriate VLAN should show <ul> <li>In config</li> <li>in YAML</li> </ul> </li> <li>VMs should NOT be able to connect on network <ul> <li>verify with ping/ICMP</li> <li>verify with SSH</li> <li>verify with telnet over port 80 if there&rsquo;s a web server</li> </ul> </li> </ol> Create RKE2 cluster with no cloud provider https://harvester.github.io/tests/manual/harvester-rancher/1577-create-rke2-cluster-no-cloud-provider/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/1577-create-rke2-cluster-no-cloud-provider/ - Related issues: #1577 Option to disable load balancer feature in cloud provider Verification Steps Click Cluster Management Click Cloud Credentials Click createa and select Harvester Input credential name Select existing cluster in the Imprted Cluster list Click Create Click Clusters Click Create Toggle RKE2/K3s Select Harvester Input Cluster Name Select default namespace Select ubuntu image Select network vlan1 Input SSH User: ubuntu Select None for cloud provider Click Create Wait for RKE2 cluster provisioning complete (~20min) Expected Results Provision RKE2 cluster successfully with Running status Can acccess RKE2 cluster to check all resources and services by clicking manage + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1577">#1577</a> Option to disable load balancer feature in cloud provider</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Click Cluster Management</li> <li>Click Cloud Credentials</li> <li>Click createa and select <code>Harvester</code></li> <li>Input credential name</li> <li>Select existing cluster in the <code>Imprted Cluster</code> list</li> <li>Click Create</li> </ol> <p><img 
src="https://images.zenhubusercontent.com/61519853321ea20d65443929/4a2f6a52-dac7-4a27-84b3-14cbeb4156aa" alt="image.png"></p> <ol> <li>Click Clusters</li> <li>Click Create</li> <li>Toggle RKE2/K3s</li> <li>Select Harvester</li> <li>Input <code>Cluster Name</code></li> <li>Select <code>default</code> namespace</li> <li>Select ubuntu image</li> <li>Select network <code>vlan1</code></li> <li>Input SSH User: <code>ubuntu</code></li> <li>Select <code>None</code> for cloud provider <img src="https://user-images.githubusercontent.com/4569037/142971322-f34a9c6d-095e-4dcc-9981-103bee4453ff.png" alt="image"></li> <li>Click Create</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/cbd9cc9b-60fb-4e81-985a-13fcaa88fa2f" alt="image.png"></p> Create Single instances of the vm with ISO image https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-iso-image-with-machine-type-pc/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-iso-image-with-machine-type-pc/ - Create vm using the external path for ISO image. In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the vm and wait for vm to start. Expected Results VM should come up and start with same config. + <ol> <li>Create vm using the external path for ISO image.</li> <li>In user data mention the below to access the vm.</li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the vm and wait for vm to start.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should come up and start with same config.</li> </ol> Create Single instances of the vm with ISO image (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-iso-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-iso-image/ - Create vm using the external path for ISO image. In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the vm and wait for vm to start. Expected Results VM should come up and start with same config. + <ol> <li>Create vm using the external path for ISO image.</li> <li>In user data mention the below to access the vm.</li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the vm and wait for vm to start.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should come up and start with same config.</li> </ol> Create Single instances of the vm with ISO image with machine type pc https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-iso-image-with-machine-type-q35/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-iso-image-with-machine-type-q35/ - Create vm using the external path for ISO image. In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the vm and wait for vm to start. Expected Results VM should come up and start with same config. 
+ <ol> <li>Create vm using the external path for ISO image.</li> <li>In user data mention the below to access the vm.</li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the vm and wait for vm to start.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should come up and start with same config.</li> </ol> Create Single instances of the vm with raw image (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-raw-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-raw-image/ - Create vm using the external path for cloud image. In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the vm and wait for vm to start. Expected Results VM should come up and start with same config. + <ol> <li>Create vm using the external path for cloud image.</li> <li>In user data mention the below to access the vm.</li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the vm and wait for vm to start.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should come up and start with same config.</li> </ol> Create Single instances of the vm with Windows Image (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-windows-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-windows-image/ - Create vm using the external path for ISO image. In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the vm and wait for vm to start. Expected Results VM should come up and start with same config. 
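<p>A note on the user-data used throughout these ISO/raw/Windows image cases: the cloud-init key that enables password-based SSH logins is ssh_pwauth (with an underscore), so a user-data file matching the intent of these steps would look like the following sketch:</p> <pre tabindex="0"><code>cat &gt; user-data &lt;&lt;'EOF'
#cloud-config
password: password
chpasswd: {expire: False}
ssh_pwauth: True
EOF
</code></pre>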
+ <ol> <li>Create vm using the external path for ISO image.</li> <li>In user data mention the below to access the vm.</li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the vm and wait for vm to start.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should come up and start with same config.</li> </ol> Create SSH key from templates page https://harvester.github.io/tests/manual/authentication/1619-create-ssh-key-from-templates-page/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/1619-create-ssh-key-from-templates-page/ - Related issues: #1619 User is unable to create ssh key through the templates page Verification Steps on a harvester deployment, navigate to advanced -&gt; templates and click create Click create new under SSH section enter valid credentials and save Expected Results SSH key should be created and show in the SSH key section + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1619">#1619</a> User is unable to create ssh key through the templates page</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>on a harvester deployment, navigate to advanced -&gt; templates and click create</li> <li>Click create new under SSH section</li> <li>enter valid credentials and save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>SSH key should be created and show in the SSH key section</li> </ol> Create SSH key from templates page https://harvester.github.io/tests/manual/templates/1619-create-ssh-key-from-templates-page/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/templates/1619-create-ssh-key-from-templates-page/ - Related issues: #1619 User is unable to create ssh key through the templates page Verification Steps on a harvester deployment, navigate to advanced -&gt; templates and click create Click create new under SSH section enter valid credentials and save Expected Results SSH key should be created and show in the SSH key section + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1619">#1619</a> User is unable to create ssh key through the templates page</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>on a harvester deployment, navigate to advanced -&gt; templates and click create</li> <li>Click create new under SSH section</li> <li>enter valid credentials and save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>SSH key should be created and show in the SSH key section</li> </ol> Create support bundle in multi-node Harvester cluster with one node off https://harvester.github.io/tests/manual/misc/1524-create-support-bundle-with-one-node-off/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/misc/1524-create-support-bundle-with-one-node-off/ - Related issues: #1524 Can&rsquo;t create support bundle if one node is off Verification Steps On a multi-node harvester cluster power off one node Navigate to support create support bundle Expected Results Support bundle should create and be downloaded YOu should be able to extract and examine support bundle + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1524">#1524</a> Can&rsquo;t create support bundle if one node is off</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>On a multi-node harvester cluster power off one node</li> <li>Navigate to support 
create support bundle</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Support bundle should create and be downloaded</li> <li>YOu should be able to extract and examine support bundle</li> </ol> Create two VMs in the same VLAN (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-two-vms-in-the-same-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-two-vms-in-the-same-vlan/ - Create/edit VM/VMs with the appropriate VLAN Expected Results VM should create successfully Appropriate VLAN should show In config in YAML VMs should be able to connect on network This can be verified with a ping over the IP, or via other options if ICMP is disabled + <ol> <li>Create/edit VM/VMs with the appropriate VLAN</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create successfully</li> <li>Appropriate VLAN should show <ul> <li>In config</li> <li>in YAML</li> </ul> </li> <li>VMs should be able to connect on network <ul> <li>This can be verified with a ping over the IP, or via other options if ICMP is disabled</li> </ul> </li> </ol> Create two VMs on separate VLANs https://harvester.github.io/tests/manual/virtual-machines/create-two-vms-on-separate-vlans/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-two-vms-on-separate-vlans/ - Create/edit VM/VMs with the appropriate VLAN Change VLAN for VM if appropriate Expected Results VM should create successfully Appropriate VLAN should show In config in YAML VMs should NOT be able to connect on network verify with ping/ICMP verify with SSH verify with telnet over port 80 if there&rsquo;s a web server + <ol> <li>Create/edit VM/VMs with the appropriate VLAN</li> <li>Change VLAN for VM if appropriate</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create successfully</li> <li>Appropriate VLAN should show <ul> <li>In config</li> <li>in YAML</li> </ul> </li> <li>VMs should NOT be able to connect on network <ul> <li>verify with ping/ICMP</li> <li>verify with SSH</li> <li>verify with telnet over port 80 if there&rsquo;s a web server</li> </ul> </li> </ol> Create two VMs on the same VLAN and change one https://harvester.github.io/tests/manual/virtual-machines/create-two-vms-on-the-same-vlan-and-change-one/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-two-vms-on-the-same-vlan-and-change-one/ - Create/edit VM/VMs with the appropriate VLAN Change VLAN for VM if appropriate Expected Results VM should create successfully Appropriate VLAN should show In config in YAML VMs should NOT be able to connect on network verify with ping/ICMP verify with SSH verify with telnet over port 80 if there&rsquo;s a web server + <ol> <li>Create/edit VM/VMs with the appropriate VLAN</li> <li>Change VLAN for VM if appropriate</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create successfully</li> <li>Appropriate VLAN should show <ul> <li>In config</li> <li>in YAML</li> </ul> </li> <li>VMs should NOT be able to connect on network <ul> <li>verify with ping/ICMP</li> <li>verify with SSH</li> <li>verify with telnet over port 80 if there&rsquo;s a web server</li> </ul> </li> </ol> Create VM and add SSH key (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-and-add-ssh-key/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-and-add-ssh-key/ - Create VM Add 
SSH key if not already in VM Logon with SSH Expected Results You should be prompted for SSH key passphrase if appropriate You should connect You should be able to execute shell commands The SSH Key should show in the SSH key list + <ol> <li>Create VM</li> <li>Add SSH key if not already in VM</li> <li>Logon with SSH</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should be prompted for SSH key passphrase if appropriate</li> <li>You should connect</li> <li>You should be able to execute shell commands</li> <li>The SSH Key should show in the SSH key list</li> </ol> Create vm using a template of default version https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-of-default-version/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-of-default-version/ - Create a new VM with a template of default version Expected Results After selecting appropriate template and/or version it should populate other fields CPU, Memory, Image, and SSH key should match saved template info VM should start after creation if Start Virtual Machine is selected + <ol> <li>Create a new VM with a template of default version</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>After selecting appropriate template and/or version it should populate other fields</li> <li>CPU, Memory, Image, and SSH key should match saved template info</li> <li>VM should start after creation if <code>Start Virtual Machine</code> is selected</li> </ol> Create vm using a template of default version with machine type pc https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-of-default-version-with-machine-type-pc/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-of-default-version-with-machine-type-pc/ - Create a new VM with a template of default version Expected Results After selecting appropriate template and/or version it should populate other fields CPU, Memory, Image, and SSH key should match saved template info VM should start after creation if Start Virtual Machine is selected + <ol> <li>Create a new VM with a template of default version</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>After selecting appropriate template and/or version it should populate other fields</li> <li>CPU, Memory, Image, and SSH key should match saved template info</li> <li>VM should start after creation if <code>Start Virtual Machine</code> is selected</li> </ol> Create vm using a template of default version with machine type q35 https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-of-default-version-with-machine-type-q35/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-of-default-version-with-machine-type-q35/ - Create a new VM with a template of default version Expected Results After selecting appropriate template and/or version it should populate other fields CPU, Memory, Image, and SSH key should match saved template info VM should start after creation if Start Virtual Machine is selected + <ol> <li>Create a new VM with a template of default version</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>After selecting appropriate template and/or version it should populate other fields</li> <li>CPU, Memory, Image, and SSH key should match saved template info</li> <li>VM should start after creation if 
<code>Start Virtual Machine</code> is selected</li> </ol> Create vm using a template of non-default version (e2e_fe) https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-non-default-version/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-non-default-version/ - Create a new VM with a template of non-default version Expected Results After selecting appropriate template and/or version it should populate other fields CPU, Memory, Image, and SSH key should match saved template info + <ol> <li>Create a new VM with a template of non-default version</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>After selecting appropriate template and/or version it should populate other fields</li> <li>CPU, Memory, Image, and SSH key should match saved template info</li> </ol> Create vm with both CPU and Memory not in cluster (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-both-cpu-and-memory-not-in-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-both-cpu-and-memory-not-in-cluster/ - Attempt to create a VM with the appropriate resources Expected Results You should get errors for each resource you over provisioned The VM should not create until errors are resolved + <ol> <li>Attempt to create a VM with the appropriate resources</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get errors for each resource you over provisioned The VM should not create until errors are resolved</li> </ol> Create vm with CPU not in cluster. (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-cpu-not-in-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-cpu-not-in-cluster/ - Attempt to create a VM with the appropriate resources Expected Results You should get errors for each resource you over provisioned The VM should not create until errors are resolved + <ol> <li>Attempt to create a VM with the appropriate resources</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get errors for each resource you over provisioned The VM should not create until errors are resolved</li> </ol> Create VM with existing Volume (e2e_be_fe) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-existing-volume/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-existing-volume/ - Create VM with an existing volume Expected Results VM should create and start You should be able to open the console for the VM and see it boot Volume should show in volumes list VM should appear to the &ldquo;Attached VM&rdquo; column of the existing volume + <ol> <li>Create VM with an existing volume</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create and start</li> <li>You should be able to open the console for the VM and see it boot</li> <li>Volume should show in volumes list</li> <li>VM should appear to the &ldquo;Attached VM&rdquo; column of the existing volume</li> </ol> Create vm with Memory not in cluster. 
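<p>To pick CPU and memory values that are genuinely not available in the cluster for the over-provisioning cases above, compare against the per-node allocatable resources first; a minimal sketch:</p> <pre tabindex="0"><code>kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory
# request more CPU and/or memory than any single node can allocate to trigger the validation errors
</code></pre>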
(e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-memory-not-in-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-memory-not-in-cluster/ - Attempt to create a VM with the appropriate resources Expected Results You should get errors for each resource you over provisioned The VM should not create until errors are resolved + <ol> <li>Attempt to create a VM with the appropriate resources</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get errors for each resource you over provisioned</li> <li>The VM should not create until errors are resolved</li> </ol> Create VM with resources that are only on one node in cluster CPU https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu/ - Edit a VM with resources that are only available on one node in cluster. Expected Results VM should save VM should be reassigned to node that has available resources VM should boot VM should pass health checks + <ol> <li>Edit a VM with resources that are only available on one node in cluster.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should be reassigned to node that has available resources</li> <li>VM should boot</li> <li>VM should pass health checks</li> </ol> Create VM with resources that are only on one node in cluster CPU (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu/ - Create a VM with resources that are only available on one node in cluster Expected Results VM should create VM should be assigned to node that has available resources VM should boot VM should pass health checks + <ol> <li>Create a VM with resources that are only available on one node in cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create</li> <li>VM should be assigned to node that has available resources</li> <li>VM should boot</li> <li>VM should pass health checks</li> </ol> Create VM with resources that are only on one node in cluster CPU and Memory (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu-and-memory/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu-and-memory/ - Create a VM with resources that are only available on one node in cluster Expected Results VM should create VM should be assigned to node that has available resources VM should boot VM should pass health checks + <ol> <li>Create a VM with resources that are only available on one node in cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create</li> <li>VM should be assigned to node that has available resources</li> <li>VM should boot</li> <li>VM should pass health checks</li> </ol> Create VM with resources that are only on one node in cluster Memory https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-with-resources-that-are-only-on-one-node-in-cluster-memory/ Mon, 01 
Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-with-resources-that-are-only-on-one-node-in-cluster-memory/ - Edit a VM with resources that are only available on one node in cluster. Expected Results VM should save VM should be reassigned to node that has available resources VM should boot VM should pass health checks + <ol> <li>Edit a VM with resources that are only available on one node in cluster.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should be reassigned to node that has available resources</li> <li>VM should boot</li> <li>VM should pass health checks</li> </ol> Create VM with resources that are only on one node in cluster Memory (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-resources-that-are-only-on-one-node-in-cluster-memory/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-resources-that-are-only-on-one-node-in-cluster-memory/ - Create a VM with resources that are only available on one node in cluster Expected Results VM should create VM should be assigned to node that has available resources VM should boot VM should pass health checks + <ol> <li>Create a VM with resources that are only available on one node in cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create</li> <li>VM should be assigned to node that has available resources</li> <li>VM should boot</li> <li>VM should pass health checks</li> </ol> Create VM with saved SSH key (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-saved-ssh-key/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-saved-ssh-key/ - Create VM Add SSH key if not already in VM Logon with SSH Expected Results You should be prompted for SSH key passphrase if appropriate You should connect You should be able to execute shell commands The SSH Key should show in the SSH key list + <ol> <li>Create VM</li> <li>Add SSH key if not already in VM</li> <li>Logon with SSH</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should be prompted for SSH key passphrase if appropriate</li> <li>You should connect</li> <li>You should be able to execute shell commands</li> <li>The SSH Key should show in the SSH key list</li> </ol> Create VM with the default network (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-the-default-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-the-default-network/ - Create a VM with the default network Let VM boot up after creation Expected Results VM should start VM should be able to ping other machines in the VLAN VM should be able to ping servers on the internet if the VLAN has external access + <ol> <li>Create a VM with the default network</li> <li>Let VM boot up after creation</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should start</li> <li>VM should be able to ping other machines in the VLAN</li> <li>VM should be able to ping servers on the internet if the VLAN has external access</li> </ol> Create VM with two disk volumes (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-two-disk-volumes/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-two-disk-volumes/ - Create a VM with the appropriate number of 
volumes Expected Results Verify after creation that the appropriate volumes are in the config for the VM Verify that the volumes are created and listed in the volumes section + <ol> <li>Create a VM with the appropriate number of volumes</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Verify after creation that the appropriate volumes are in the config for the VM</li> <li>Verify that the volumes are created and listed in the volumes section</li> </ol> Create VM without memory provided (e2e_fe) https://harvester.github.io/tests/manual/virtual-machines/create-vm-without-memory-provided/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-without-memory-provided/ - Related issues: #1477 intimidating error message when missing mandatory field Category Virtual Machine Verification Steps Create some image and volume Create virtual machine Fill out all mandatory field but leave memory blank. Click create Expected Results Leave empty memory field empty when create virtual machine will show &ldquo;Memory is required&rdquo; error message + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1477">#1477</a> intimidating error message when missing mandatory field</li> </ul> <h2 id="category">Category</h2> <ul> <li>Virtual Machine</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create some image and volume</li> <li>Create virtual machine</li> <li>Fill out all mandatory field but leave memory blank.</li> <li>Click create</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Leave empty memory field empty when create virtual machine will show &ldquo;Memory is required&rdquo; error message</p> <p><img src="https://user-images.githubusercontent.com/29251855/140006054-92b12a07-af8b-4087-9fc8-4cf76c6500ea.png" alt="image"></p> Create Volume root disk blank Form with label https://harvester.github.io/tests/manual/volumes/create-volume-root-disk-blank-form-label/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/create-volume-root-disk-blank-form-label/ - Navigate to volumes page Click Create Don&rsquo;t select an image Input a size Click Create Expected Results Page should load Volume should create successfully and go to succeeded in the list The label can be seen when you edit the volume config + <ol> <li>Navigate to volumes page</li> <li>Click Create</li> <li>Don&rsquo;t select an image</li> <li>Input a size</li> <li>Click Create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Page should load</li> <li>Volume should create successfully and go to succeeded in the list</li> <li>The label can be seen when you edit the volume config</li> </ol> Create volume root disk VM Image Form with label (e2e_be) https://harvester.github.io/tests/manual/volumes/create-volume-root-disk-vm-image-form-label/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/create-volume-root-disk-vm-image-form-label/ - Navigate to volumes page Click Create Select an image Input a size Click Create Expected Results Page should load Volume should create successfully and go to succeeded in the list The label can be seen when you edit the volume config + <ol> <li>Navigate to volumes page</li> <li>Click Create</li> <li>Select an image</li> <li>Input a size</li> <li>Click Create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Page should load</li> <li>Volume should create successfully and go to succeeded in the list</li> 
<li>The label can be seen when you edit the volume config</li> </ol> Create volume root disk VM Image Form(e2e_fe) https://harvester.github.io/tests/manual/volumes/create-volume-root-disk-vm-image-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/create-volume-root-disk-vm-image-form/ - Navigate to volumes page Click Create Select an image Input a size Click Create Expected Results VM should create VM should pass health checks + <ol> <li>Navigate to volumes page</li> <li>Click Create</li> <li>Select an image</li> <li>Input a size</li> <li>Click Create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create</li> <li>VM should pass health checks</li> </ol> Create Windows VM https://harvester.github.io/tests/manual/virtual-machines/create-windows-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-windows-vm/ - Create a VM with the VM template with windows-iso-image-base-temp Config the CPU and Memory to 4 and 8 respectively Select the windows ISO image Click the Volumes tab and update the root disk size to 50GB Click create to launch the windows VM Optional: you can increase the second disk size or add an additional one. Click create to launch the VM (this will take a couple of minutes upon your network speed of download the ISO image) Click the Console to launch a VNC console of the windows server, and you will need to find an evaluation key of the windows server 2012 installation. + <ol> <li>Create a VM with the VM template with windows-iso-image-base-temp</li> <li>Config the CPU and Memory to 4 and 8 respectively</li> <li>Select the windows ISO image</li> <li>Click the Volumes tab and update the root disk size to 50GB</li> <li>Click create to launch the windows VM</li> <li>Optional: you can increase the second disk size or add an additional one.</li> <li>Click create to launch the VM (this will take a couple of minutes upon your network speed of download the ISO image)</li> <li>Click the Console to launch a VNC console of the windows server, and you will need to find an evaluation key of the windows server 2012 installation.</li> <li>Optional: you may continue to create other VMs as described in the below doc to skip the image downloading and installation times.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should come up and start.</li> </ol> Create with invalid image (e2e_be_fe) https://harvester.github.io/tests/manual/images/negative-create-with-invalid-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/negative-create-with-invalid-image/ - Create image with invalid URL. e.g. - https://test.img Expected Results Image state show as Failed + <ol> <li>Create image with invalid URL. e.g. 
- <a href="https://test.img">https://test.img</a></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Image state show as Failed</li> </ol> Dashboard Storage usage display when node disk have warning https://harvester.github.io/tests/manual/_incoming/2622-dashboard-storage-usage-display-when-node-disk-have-warning/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2622-dashboard-storage-usage-display-when-node-disk-have-warning/ - Related issues: #2622 [BUG] Dashboard Storage used is wrong when a node disk is warning Category: Storage Verification Steps Login harvester dashboard Access Longhorn UI from url https://192.168.122.136/dashboard/c/local/longhorn Go to Node page Click edit node and disks Select disabling Node scheduling Select disabling storage scheduling on the bottom Open Longhorn dashboard page, check the Storage Schedulable Open Harvester dashboard page, check the used and scheduled storage size Expected Results After disabling the node and storage scheduling on Longhorn UI. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2622">#2622</a> [BUG] Dashboard Storage used is wrong when a node disk is warning</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Login harvester dashboard</li> <li>Access Longhorn UI from url https://192.168.122.136/dashboard/c/local/longhorn</li> <li>Go to Node page</li> <li>Click edit node and disks</li> <li>Select disabling Node scheduling <img src="https://user-images.githubusercontent.com/29251855/187578343-653d0235-92a9-4979-aae0-b62b606df525.png" alt="image"></li> <li>Select disabling storage scheduling on the bottom <img src="https://user-images.githubusercontent.com/29251855/187578175-326b5909-cd6a-4e31-a1cf-92df5e619a5c.png" alt="image"></li> <li>Open Longhorn dashboard page, check the Storage Schedulable</li> <li>Open Harvester dashboard page, check the used and scheduled storage size</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>After disabling the node and storage scheduling on Longhorn UI.</p> datavolumes.cdi.kubevirt.io https://harvester.github.io/tests/manual/webhooks/datavolumes.cdi.kubevirt.io/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/webhooks/datavolumes.cdi.kubevirt.io/ - GUI Create a VM in GUI and wait until it&rsquo;s running. Assume its name is test. kube-api Try to delete its datavolume: $ kubectl get vms NAME AGE STATUS READY test 5m16s Running True There should be an datavolume bound to that VM $ kubectl get dvs NAME PHASE PROGRESS RESTARTS AGE test-disk-0-klrft Succeeded 100.0% 5m18s The user should not be able to delete the datavolume $ kubectl delete dv test-disk-0-klrft The request is invalid: : can not delete the volume test-disk-0-klrft which is currently attached to VMs: default/test `` ## Expected Results ### kube-api The deletion of its datavolume should fail. + <h2 id="gui">GUI</h2> <ol> <li>Create a VM in GUI and wait until it&rsquo;s running. 
Assume its name is test.</li> </ol> <h3 id="kube-api">kube-api</h3> <ol> <li>Try to delete its datavolume:</li> </ol> <pre tabindex="0"><code>$ kubectl get vms NAME AGE STATUS READY test 5m16s Running True </code></pre><ol> <li>There should be an datavolume bound to that VM</li> </ol> <pre tabindex="0"><code>$ kubectl get dvs NAME PHASE PROGRESS RESTARTS AGE test-disk-0-klrft Succeeded 100.0% 5m18s </code></pr Deactivate/activate/delete Harvester Node Driver https://harvester.github.io/tests/manual/node-driver/deactivate-activate-deletenode-driver/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/deactivate-activate-deletenode-driver/ - With Rancher &lt; 2.6: Tools-&gt;Driver Management→Node Driver Deactivate/activate/delete Harvester Node Driver With Rancher 2.6: Cluster Management &gt; Drivers &gt; Node Drivers Deactivate/activate/delete Harvester Node Driver Expected Results Harvester icon is not visible when creating a cluster / Harvester icon is visible when creating a cluster /Harvester icon is not visible when creating a cluster + <p>With Rancher &lt; 2.6:</p> <ol> <li>Tools-&gt;Driver Management→Node Driver</li> <li>Deactivate/activate/delete Harvester Node Driver With Rancher 2.6:</li> <li>Cluster Management &gt; Drivers &gt; Node Drivers</li> <li>Deactivate/activate/delete Harvester Node Driver</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Harvester icon is not visible when creating a cluster / Harvester icon is visible when creating a cluster /Harvester icon is not visible when creating a cluster</li> </ol> Dedicated storage network https://harvester.github.io/tests/manual/_incoming/1055_dedicated_storage_network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1055_dedicated_storage_network/ - Ref: https://github.com/harvester/harvester/issues/1055 Verified this feature has been implemented partially. Mentioned problem in https://github.com/harvester/harvester/issues/1055#issuecomment-1283754519 will be introduced as a enhancement in #2995 Test Information Environment: baremetal DL360G9 5 nodes Harvester Version: master-bd1d49a9-head ui-source Option: Auto Verify Steps: Install Harvester with any nodes Navigate to Networks -&gt; Cluster Networks/Configs, create Cluster Network named vlan Navigate to Advanced -&gt; Settings, edit storage-network Select Enable then select vlan as cluster network, fill in VLAN ID and IP Range Click Save, warning or error message should displayed. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1055">https://github.com/harvester/harvester/issues/1055</a></p> <p>Verified this feature has been implemented partially. 
Mentioned problem in <a href="https://github.com/harvester/harvester/issues/1055#issuecomment-1283754519">https://github.com/harvester/harvester/issues/1055#issuecomment-1283754519</a> will be introduced as a enhancement in #2995</p> <h2 id="test-information">Test Information</h2> <ul> <li>Environment: <strong>baremetal DL360G9 5 nodes</strong></li> <li>Harvester Version: <strong>master-bd1d49a9-head</strong></li> <li><strong>ui-source</strong> Option: <strong>Auto</strong></li> </ul> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Navigate to <em>Networks -&gt; Cluster Networks/Configs</em>, create Cluster Network named <code>vlan</code></li> <li>Navigate to <em>Advanced -&gt; Settings</em>, edit <code>storage-network</code></li> <li>Select <code>Enable</code> then select <code>vlan</code> as cluster network, fill in <strong>VLAN ID</strong> and <strong>IP Range</strong></li> <li>Click Save, warning or error message should displayed.</li> <li>edit <code>storage-network</code> again, <code>mgmt</code> should not in the drop-down list of <code>Cluster Network</code></li> <li>Navigate to <em>Networks -&gt; Cluster Networks/Configs</em>, create Cluster Network named <code>vlan2</code></li> <li>Create <code>Network Config</code> for all nodes</li> <li>Navigate to <em>Advanced -&gt; Settings</em>, edit <code>storage-network</code></li> <li>Select <code>Enable</code> then select <code>vlan2</code> as cluster network, fill in <strong>VLAN ID</strong> and <strong>IP Range</strong></li> <li>Navigate to <em>Networks -&gt; Cluster Networks/Configs</em>, delete Cluster Network <code>vlan2</code></li> <li>Warning or error message should displayed</li> </ol> Delete 3 node RKE2 cluster https://harvester.github.io/tests/manual/harvester-rancher/1311-delete-3-node-rke2-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/1311-delete-3-node-rke2-cluster/ - Related issues: #1311 Deleting a cluster in rancher dashboard doesn&rsquo;t fully remove Verification Steps Create 3 node RKE2 cluster on Harvester through node driver with Rancher Wait fo the nodes to create, but not fully provision Delete the cluster Wait for them to be removed from Harvester Check Rancher cluster management Expected Results Cluster should be removed from Rancher VMs should be removed from Harvester + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1311">#1311</a> Deleting a cluster in rancher dashboard doesn&rsquo;t fully remove</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create 3 node RKE2 cluster on Harvester through node driver with Rancher</li> <li>Wait fo the nodes to create, but not fully provision</li> <li>Delete the cluster</li> <li>Wait for them to be removed from Harvester</li> <li>Check Rancher cluster management</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Cluster should be removed from Rancher</li> <li>VMs should be removed from Harvester</li> </ol> Delete backup from backups list (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/delete-single-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/delete-single-backup/ - Delete backup from backups list Expected Results Backup should be removed from list Backup should be removed from remote storage + <ol> <li>Delete backup from backups list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backup should be removed from 
list</li> <li>Backup should be removed from remote storage</li> </ol> Delete Cluster https://harvester.github.io/tests/manual/node-driver/cluster-delete/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-delete/ - Delete Cluster Expected Results successful cluster deletion in rancher the corresponding VM node in harvester is deleted successfully + <ol> <li>Delete Cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>successful cluster deletion in rancher</li> <li>the corresponding VM node in harvester is deleted successfully</li> </ol> Delete external VLAN network via form https://harvester.github.io/tests/manual/network/delete-vlan-network-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/delete-vlan-network-form/ - On a VM with both an external VLAN and a management VLAN delete the external VLAN via the web form Validate interface was removed with ip link list Ping the VM from another VM that is only on the management VLAN Ping the VM from an external machine Expected Results The VM should update and reboot You should only see one interface (and the loopback) in the list You should not be able to ping the VM on the external VLAN You should get responses from the VM + <ol> <li>On a VM with both an external VLAN and a management VLAN delete the external VLAN via the web form</li> <li>Validate interface was removed with <ul> <li><code>ip link list</code></li> </ul> </li> <li>Ping the VM from another VM that is only on the management VLAN</li> <li>Ping the VM from an external machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should update and reboot</li> <li>You should only see one interface (and the loopback) in the list</li> <li>You should not be able to ping the VM on the external VLAN</li> <li>You should get responses from the VM</li> </ol> Delete external VLAN network via YAML (e2e_be) https://harvester.github.io/tests/manual/network/delete-vlan-network-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/delete-vlan-network-yaml/ - On a VM with both an external VLAN and a management VLAN delete the external VLAN via YAML Validate interface was removed with ip link list Ping the VM from another VM that is only on the management VLAN Ping the VM from an external machine Expected Results The VM should update and reboot You should only see one interface (and the loopback) in the list You should not be able to ping the VM on the external VLAN You should get responses from the VM + <ol> <li>On a VM with both an external VLAN and a management VLAN delete the external VLAN via YAML</li> <li>Validate interface was removed with <ul> <li><code>ip link list</code></li> </ul> </li> <li>Ping the VM from another VM that is only on the management VLAN</li> <li>Ping the VM from an external machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should update and reboot</li> <li>You should only see one interface (and the loopback) in the list</li> <li>You should not be able to ping the VM on the external VLAN</li> <li>You should get responses from the VM</li> </ol> Delete first backup in chained backup (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/delete-first-backup-chained-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/delete-first-backup-chained-backup/ - Create a new VM Create a file named 1 and add text Create a backup Edit text in 
file 1 create file 2 Create Backup Edit file 2 text Create file 3 and add text Create backup Delete backup 1 Validate file 2 and 3 are the same as they were Restore to backup 2 Validate that md5sum -c file1-2.md5 file2.md5 file3.md5 file 1 is in second format file 2 is in first format file 3 doesn&rsquo;t exist Expected Results Vm should create All file operations should create Backup should run All file operations should create Backup should run All file operations should create files should be as expected + <ol> <li>Create a new VM</li> <li>Create a file named 1 and add text</li> <li>Create a backup</li> <li>Edit text in file 1</li> <li>create file 2</li> <li>Create Backup</li> <li>Edit file 2 text</li> <li>Create file 3 and add text</li> <li>Create backup</li> <li>Delete backup 1</li> <li>Validate file 2 and 3 are the same as they were</li> <li>Restore to backup 2</li> <li>Validate that <ul> <li><code>md5sum -c file1-2.md5 file2.md5 file3.md5</code></li> <li>file 1 is in second format</li> <li>file 2 is in first format</li> <li>file 3 doesn&rsquo;t exist</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Vm should create</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>files should be as expected</li> </ol> Delete Host (e2e_be) https://harvester.github.io/tests/manual/hosts/delete-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/delete-host/ - Navigate to the Hosts page and select the node Click Delete Expected Results SSH to the node and check the nodes has components deleted. + <ol> <li>Navigate to the Hosts page and select the node</li> <li>Click Delete</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>SSH to the node and check the nodes has components deleted.</li> </ol> Delete host that has VMs on it https://harvester.github.io/tests/manual/hosts/delete-host-with-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/delete-host-with-vm/ - Navigate to the Hosts page and select the node Click Delete Expected Results An alert message should appear. If VM exists it should stop user to delete the node or move VM to other node. If VM is getting moved to another node and there is no space, it should stop user to delete the node. 
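To confirm which VMs are still scheduled on a host before attempting the deletion, a minimal kubectl sketch using standard KubeVirt resources (node and namespace names are whatever your cluster uses):
$ kubectl get vmi -A -o wide   # the NODE column shows which node each running VM instance is on
$ kubectl get nodes            # confirm the name of the host you intend to delete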
Existing bugs https://github.com/harvester/harvester/issues/1004 + <ol> <li>Navigate to the Hosts page and select the node</li> <li>Click Delete</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>An alert message should appear.</li> <li>If VM exists it should stop user to delete the node or move VM to other node.</li> <li>If VM is getting moved to another node and there is no space, it should stop user to delete the node.</li> </ol> <h3 id="existing-bugs">Existing bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/1004">https://github.com/harvester/harvester/issues/1004</a></p> Delete last backup in chained backup (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/delete-last-backup-chained-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/delete-last-backup-chained-backup/ - Create a new VM Create a file named 1 and add some data using command dd if=/dev/urandom of=file1.txt count=100 bs=1M Compute md5sum : md5sum-1 Create a backup Overwrite file 1 Create file 2 Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3 Create Backup Overwrite the file 2 Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5 Create backup delete backup 3 Validate that files didn&rsquo;t change Restore to backup 2 Validate that md5sum -c file1-2. + <ol> <li>Create a new VM</li> <li>Create a file named 1 and add some data using command <code>dd if=/dev/urandom of=file1.txt count=100 bs=1M</code></li> <li>Compute md5sum : md5sum-1</li> <li>Create a backup</li> <li>Overwrite file 1</li> <li>Create file 2</li> <li>Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3</li> <li>Create Backup</li> <li>Overwrite the file 2</li> <li>Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5</li> <li>Create backup</li> <li>delete backup 3</li> <li>Validate that files didn&rsquo;t change</li> <li>Restore to backup 2</li> <li>Validate that <ul> <li><code>md5sum -c file1-2.md5 file2.md5 file3.md5 </code></li> <li>file 1 is in second format</li> <li>file 2 is in original format</li> <li>file 3 doesn&rsquo;t exist</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Vm should create</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>files should be as expected</li> </ol> Delete management network via form https://harvester.github.io/tests/manual/network/delete-management-network-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/delete-management-network-form/ - On a VM with both an external VLAN and a management VLAN delete the management VLAN via the web form Validate interface was removed with ip link list Ping the VM from another VM that is only on the management VLAN Ping the VM from an external machine Expected Results The VM should update and reboot You should only see one interface (and the loopback) in the list You should not be able to ping the VM on the management VLAN You should get responses from the VM + <ol> <li>On a VM with both an external VLAN and a management VLAN delete the management VLAN via the web form</li> <li>Validate interface was removed with <ul> <li><code>ip link list</code></li> </ul> </li> <li>Ping the VM from another VM that is only on the management VLAN</li> <li>Ping the VM from an external machine</li> </ol> <h2 id="expected-results">Expected 
Results</h2> <ol> <li>The VM should update and reboot</li> <li>You should only see one interface (and the loopback) in the list</li> <li>You should not be able to ping the VM on the management VLAN</li> <li>You should get responses from the VM</li> </ol> Delete management network via YAML (e2e_be) https://harvester.github.io/tests/manual/network/delete-management-network-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/delete-management-network-yaml/ - On a VM with both an external VLAN and a management VLAN delete the management network via YAML Validate interface was removed with ip link list Ping the VM from another VM that is only on the management VLAN Ping the VM from an external machine Expected Results The VM should update and reboot You should only see one interface (and the loopback) in the list You should not be able to ping the VM on the management network You should get responses from the VM + <ol> <li>On a VM with both an external VLAN and a management VLAN delete the management network via YAML</li> <li>Validate interface was removed with <ul> <li><code>ip link list</code></li> </ul> </li> <li>Ping the VM from another VM that is only on the management VLAN</li> <li>Ping the VM from an external machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should update and reboot</li> <li>You should only see one interface (and the loopback) in the list</li> <li>You should not be able to ping the VM on the management network</li> <li>You should get responses from the VM</li> </ol> Delete middle backup in chained backup (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/delete-middle-backup-chained-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/delete-middle-backup-chained-backup/ - Create a new VM Create a file named 1 and add some data using command dd if=/dev/urandom of=file1.txt count=100 bs=1M Compute md5sum : md5sum-1 Create a backup Overwrite file 1 Create file 2 Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3 Create Backup Overwrite the file 2 Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5 Create backup Delete backup 2 Validate file 2 and 3 are the same as they were Restore to backup 1 Validate that md5sum -c file1. 
+ <ol> <li>Create a new VM</li> <li>Create a file named 1 and add some data using command <code>dd if=/dev/urandom of=file1.txt count=100 bs=1M</code></li> <li>Compute md5sum : md5sum-1</li> <li>Create a backup</li> <li>Overwrite file 1</li> <li>Create file 2</li> <li>Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3</li> <li>Create Backup</li> <li>Overwrite the file 2</li> <li>Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5</li> <li>Create backup</li> <li>Delete backup 2</li> <li>Validate file 2 and 3 are the same as they were</li> <li>Restore to backup 1</li> <li>Validate that <ul> <li><code>md5sum -c file1.md5 file2.md5 file3.md5 </code></li> <li>file 1 is in original format - md5sum-1</li> <li>file 2 doesn&rsquo;t exist</li> <li>file 3 doesn&rsquo;t exist</li> </ul> </li> <li>Validate data by restoring other backups also.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Vm should create</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>files should be as expected</li> </ol> Delete multiple backups https://harvester.github.io/tests/manual/backup-and-restore/delete-multiple-backups/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/delete-multiple-backups/ - Select multiple Backups from Backups list Click Delete Expected Results Backups should be removed from list Backups should be removed from remote storage + <ol> <li>Select multiple Backups from Backups list</li> <li>Click Delete</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backups should be removed from list</li> <li>Backups should be removed from remote storage</li> </ol> Delete multiple VMs with disks (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/delete-multiple-vms-with-disks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/delete-multiple-vms-with-disks/ - Delete VM Select whether you want to delete disks Expected Results You should check amount of used space on Server before you delete the VM Machine should delete It should not show up in the Virtual Machine list Disks should be listed/or not in Volumes list as appropriate Verify the cleaned up the space on the disk on the node. + <ol> <li>Delete VM</li> <li>Select whether you want to delete disks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should check amount of used space on Server before you delete the VM</li> <li>Machine should delete</li> <li>It should not show up in the Virtual Machine list</li> <li>Disks should be listed/or not in Volumes list as appropriate</li> <li>Verify the cleaned up the space on the disk on the node.</li> </ol> Delete multiple VMs without disks (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/delete-multiple-vms-without-disks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/delete-multiple-vms-without-disks/ - Delete VM Select whether you want to delete disks Expected Results You should check amount of used space on Server before you delete the VM Machine should delete It should not show up in the Virtual Machine list Disks should be listed/or not in Volumes list as appropriate Verify the cleaned up the space on the disk on the node. 
+ <ol> <li>Delete VM</li> <li>Select whether you want to delete disks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should check amount of used space on Server before you delete the VM</li> <li>Machine should delete</li> <li>It should not show up in the Virtual Machine list</li> <li>Disks should be listed/or not in Volumes list as appropriate</li> <li>Verify the cleaned up the space on the disk on the node.</li> </ol> Delete single vm all disks (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/delete-single-vm-all-disks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/delete-single-vm-all-disks/ - Delete VM Select whether you want to delete disks Expected Results You should check amount of used space on Server before you delete the VM Machine should delete It should not show up in the Virtual Machine list Disks should be listed/or not in Volumes list as appropriate Verify the cleaned up the space on the disk on the node. + <ol> <li>Delete VM</li> <li>Select whether you want to delete disks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should check amount of used space on Server before you delete the VM</li> <li>Machine should delete</li> <li>It should not show up in the Virtual Machine list</li> <li>Disks should be listed/or not in Volumes list as appropriate</li> <li>Verify the cleaned up the space on the disk on the node.</li> </ol> Delete the image (e2e_be_fe) https://harvester.github.io/tests/manual/images/delete-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/delete-image/ - Select an image with state active. Delete the image. Create another image with same name. Delete the newly created image. Delete an image with failed state Expected Results The image should be deleted successfully. Check the CRDS VirtualMachineImage. User should be able to create a new image with same name. Check the backing image in Longhorn. + <ol> <li>Select an image with state active.</li> <li>Delete the image.</li> <li>Create another image with same name.</li> <li>Delete the newly created image.</li> <li>Delete an image with failed state</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The image should be deleted successfully. 
Check the CRDS VirtualMachineImage.</li> <li>User should be able to create a new image with same name.</li> <li>Check the backing image in Longhorn.</li> </ol> Delete VM Negative (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/negative-delete-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/negative-delete-vm/ - In a multi-node setup disconnect/shutdown the node where the VM is running Delete VM and all disks Expected Results You should not be able to delete the VM + <ol> <li>In a multi-node setup disconnect/shutdown the node where the VM is running</li> <li>Delete VM and all disks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should not be able to delete the VM</li> </ol> Delete VM template default version (e2e_fe) https://harvester.github.io/tests/manual/_incoming/2376-2379-delete-vm-template-default-version/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2376-2379-delete-vm-template-default-version/ - Related issues: #2376 [BUG] Cannot delete Template Related issues: #2379 [backport v1.0.3] Cannot delete Template Category: VM Template Verification Steps Go to Advanced -&gt; Templates Create a new template Modify the template to create a new version Click the config button of the default version template Click the config button of the non default version template Expected Results If the template is the default version, it will not display the delete button If the template is not the default version, it will display the delete button We can also delete the entire template from the config button + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2376">#2376</a> [BUG] Cannot delete Template</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2379">#2379</a> [backport v1.0.3] Cannot delete Template</li> </ul> <h2 id="category">Category:</h2> <ul> <li>VM Template</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Go to Advanced -&gt; Templates</li> <li>Create a new template</li> <li>Modify the template to create a new version</li> <li>Click the config button of the <code>default version</code> template</li> <li>Click the config button of the <code>non default version</code> template</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li> <p>If the template is the <code>default version</code>, it will not display the <code>delete</code> button <img src="https://user-images.githubusercontent.com/29251855/174030567-b2c6ae52-40d1-4dd6-9ede-783409bd3c87.png" alt="image"></p> Delete VM with exported image (e2e_fe) https://harvester.github.io/tests/manual/virtual-machines/1602-delete-vm-with-exported-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/1602-delete-vm-with-exported-image/ - Related issues: #1602 exported image can&rsquo;t be deleted after vm removed Verification Steps create vm &ldquo;vm-1&rdquo; create a image &ldquo;img-1&rdquo; by export the volume used by vm &ldquo;vm-1&rdquo; delete vm &ldquo;vm-1&rdquo; delete image &ldquo;img-1&rdquo; Expected Results image &ldquo;img-1&rdquo; will be deleted + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1602">#1602</a> exported image can&rsquo;t be deleted after vm removed</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>create vm &ldquo;vm-1&rdquo;</li> <li>create a image &ldquo;img-1&rdquo; by export the volume used by vm 
&ldquo;vm-1&rdquo;</li> <li>delete vm &ldquo;vm-1&rdquo;</li> <li>delete image &ldquo;img-1&rdquo;</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>image &ldquo;img-1&rdquo; will be deleted</li> </ol> Delete VM with exported image(e2e_fe) https://harvester.github.io/tests/manual/images/1602-delete-vm-with-exported-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/1602-delete-vm-with-exported-image/ - Related issues: #1602 exported image can&rsquo;t be deleted after vm removed Verification Steps create vm &ldquo;vm-1&rdquo; create a image &ldquo;img-1&rdquo; by export the volume used by vm &ldquo;vm-1&rdquo; delete vm &ldquo;vm-1&rdquo; delete image &ldquo;img-1&rdquo; Expected Results image &ldquo;img-1&rdquo; will be deleted + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1602">#1602</a> exported image can&rsquo;t be deleted after vm removed</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>create vm &ldquo;vm-1&rdquo;</li> <li>create a image &ldquo;img-1&rdquo; by export the volume used by vm &ldquo;vm-1&rdquo;</li> <li>delete vm &ldquo;vm-1&rdquo;</li> <li>delete image &ldquo;img-1&rdquo;</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>image &ldquo;img-1&rdquo; will be deleted</li> </ol> Delete volume that is not attached to a VM (e2e_be_fe) https://harvester.github.io/tests/manual/volumes/delete-volume-that-is-not-attached-to-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/delete-volume-that-is-not-attached-to-vm/ - Create volume Validate that it created Check the volume crd. Delete the volume Verify that volume is removed from list Check the volume object doesn&rsquo;t exist anymore. Expected Results Volume should create It should show in volume list Volume crd should have correct info. Volume should delete. Volume should be removed from list + <ol> <li>Create volume</li> <li>Validate that it created</li> <li>Check the volume crd.</li> <li>Delete the volume</li> <li>Verify that volume is removed from list</li> <li>Check the volume object doesn&rsquo;t exist anymore.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Volume should create</li> <li>It should show in volume list</li> <li>Volume crd should have correct info.</li> <li>Volume should delete.</li> <li>Volume should be removed from list</li> </ol> Delete volume that was attached to VM but now is not (e2e_be_fe) https://harvester.github.io/tests/manual/volumes/delete-volume-that-was-attached-to-vm-but-is-not-now/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/delete-volume-that-was-attached-to-vm-but-is-not-now/ - Create a VM with a root volume Write 10Gi data into it. Delete the VM but not the volume Verify Volume still exists Check disk space on node Delete the volume Verify that volume is removed from list Check disk space on node Expected Results VM should create 10Gi space should be consumed on the disk. 
VM should delete Volume should still show in Volume list Disk space should show 10Gi + Volume should delete Volume should be removed from list Space should be less than before + <ol> <li>Create a VM with a root volume</li> <li>Write 10Gi data into it.</li> <li>Delete the VM but not the volume</li> <li>Verify Volume still exists</li> <li>Check disk space on node</li> <li>Delete the volume</li> <li>Verify that volume is removed from list</li> <li>Check disk space on node</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create</li> <li>10Gi space should be consumed on the disk.</li> <li>VM should delete</li> <li>Volume should still show in Volume list</li> <li>Disk space should show 10Gi +</li> <li>Volume should delete</li> <li>Volume should be removed from list</li> <li>Space should be less than before</li> </ol> Deny the vlanconfigs overlap with the other https://harvester.github.io/tests/manual/_incoming/2828-deny-vlanconfig-overlap-others/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2828-deny-vlanconfig-overlap-others/ - Related issues: #2828 [BUG][FEATURE] Deny the vlanconfigs overlap with the other Category: Network Verification Steps Prepare a 3 nodes Harvester on local kvm Each VM have five NICs attached. Create a cluster network cn1 Create a vlanconfig config-all which applied to all nodes Set one of the NIC On the same cluster network, create another vlan network config-one which applied to only node 1 Provide another NIC Click the create button Expected Results Under the same Cluster Network: + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2828">#2828</a> [BUG][FEATURE] Deny the vlanconfigs overlap with the other</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare a 3 nodes Harvester on local kvm</li> <li>Each VM have five NICs attached.</li> <li>Create a cluster network <code>cn1</code></li> <li>Create a vlanconfig <code>config-all</code> which applied to <code>all nodes</code> <img src="https://user-images.githubusercontent.com/29251855/196409238-dd1a5d9f-bf00-46cd-93b2-c9469bf7c58a.png" alt="image"></li> <li>Set one of the NIC <img src="https://user-images.githubusercontent.com/29251855/196409451-5279f4e5-e66a-4960-8889-cc1c186acfdc.png" alt="image"></li> <li>On the same cluster network, create another vlan network <code>config-one</code> which applied to only <code>node 1</code> <img src="https://user-images.githubusercontent.com/29251855/196409565-67e2e418-1efc-4c50-a016-7fea4dd582a3.png" alt="image"></li> <li>Provide another NIC <img src="https://user-images.githubusercontent.com/29251855/196409613-e214183d-b665-453e-8fa8-246f21a11243.png" alt="image"></li> <li>Click the create button</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Under the same Cluster Network:</p> Deploy guest cluster to specific node with Node selector label https://harvester.github.io/tests/manual/_incoming/2316-2384-deploy-guest-cluster-node-selector-label-copy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2316-2384-deploy-guest-cluster-node-selector-label-copy/ - Related issues: #2316 [BUG] Guest cluster nodes distributed across failure domain Related issues: #2384 [backport v1.0.3] Guest cluster nodes distributed across failure domains Category: Rancher integration Verification Steps RKE2 Verification Steps Open Harvester Host page then edit host config Add the 
following key value in the labels page: topology.kubernetes.io/zone: zone_bp topology.kubernetes.io/region: region_bp Open the RKE2 provisioning page Expand the show advanced Click add Node selector in Node scheduling Use default Required priority Click Add Rule Provide the following key/value pairs topology. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2316">#2316</a> [BUG] Guest cluster nodes distributed across failure domain</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2384">#2384</a> [backport v1.0.3] Guest cluster nodes distributed across failure domains</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <h3 id="rke2-verification-steps">RKE2 Verification Steps</h3> <ol> <li>Open Harvester Host page then edit host config</li> <li>Add the following key value in the labels page: <ul> <li>topology.kubernetes.io/zone: zone_bp</li> <li>topology.kubernetes.io/region: region_bp <img src="https://user-images.githubusercontent.com/29251855/179735384-77e99870-92ad-41c2-b414-a872130c0b27.png" alt="image"></li> </ul> </li> <li>Open the RKE2 provisioning page</li> <li>Expand the show advanced</li> <li>Click add Node selector in <code>Node scheduling</code></li> <li>Use default <code>Required</code> priority</li> <li>Click Add Rule</li> <li>Provide the following key/value pairs <ul> <li><code>topology.kubernetes.io/zone: zone_bp</code></li> <li><code>topology.kubernetes.io/region: region_bp</code> <img src="https://user-images.githubusercontent.com/29251855/179736419-78612fd1-9990-44d8-b9be-d9a850bd27a0.png" alt="image"></li> </ul> </li> <li>Provide the following user data <pre tabindex="0"><code>password: 123456 chpasswd: { expire: False } ssh_pwauth: True </code></pr Detach volume from virtual machine (e2e_fe) https://harvester.github.io/tests/manual/volumes/detach-volume-from-virtual-machine/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/detach-volume-from-virtual-machine/ - Related issues: #1708 After click &ldquo;Detach volume&rdquo; button, nothing happend Category: Volume Verification Steps Create several new volume in volumes page Create a virtual machine Click the config button on the selected virtual machine Click Add volume and add at least two new volume Click the Detach volume button on the attached volume Repeat above steps several times Expected Results Currently when click the Detach volume button, attached volume can be detach successfully. 
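Inside the guest, the detach can be double-checked by listing block devices; a minimal sketch:
$ lsblk   # while the volume is attached, it appears as an extra disk (e.g. vdb)
$ lsblk   # run again after Detach volume: the extra disk should no longer be listed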
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1708">#1708</a> After click &ldquo;Detach volume&rdquo; button, nothing happend</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Volume</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create several new volume in volumes page <img src="https://user-images.githubusercontent.com/29251855/146900871-50fad5fa-2d25-4559-b10b-55e276d7edb8.png" alt="image"></li> <li>Create a virtual machine</li> <li>Click the config button on the selected virtual machine</li> <li>Click Add volume and add at least two new volume <img src="https://user-images.githubusercontent.com/29251855/146901117-dac73494-d8fd-4e1c-9a74-eed76fc14511.png" alt="image"></li> <li>Click the <code>Detach volume</code> button on the attached volume <img src="https://user-images.githubusercontent.com/29251855/146901585-51df212b-5443-4961-b648-6db265c272c2.png" alt="image"></li> </ol> <p><img src="https://user-images.githubusercontent.com/29251855/146901235-6607a936-884b-41d9-94e2-372e8c028334.png" alt="image"></p> Disable and enable vlan cluster network https://harvester.github.io/tests/manual/network/disable-and-enable-vlan-cluster-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/disable-and-enable-vlan-cluster-network/ - Related issues: #1529 Failed to enable vlan cluster network after disable and enable again, display &ldquo;Network Error&rdquo; Category: Network Verification Steps Open settings and config vlan network Enable network and set default harvester-mgmt Disable network Enable network again Check Host, Network and harvester dashboard Repeat above steps several times Expected Results User can disable and enable network with default harvester-mgmt. 
Harvester dashboard and network work as expected + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1529">#1529</a> Failed to enable vlan cluster network after disable and enable again, display &ldquo;Network Error&rdquo;</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Open settings and config vlan network</li> <li>Enable network and set default harvester-mgmt</li> <li>Disable network</li> <li>Enable network again</li> <li>Check Host, Network and harvester dashboard</li> <li>Repeat above steps several times</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>User can disable and enable network with default <code>harvester-mgmt</code>.</li> <li>Harvester dashboard and network work as expected</li> </ol> Disk can only be added once on UI https://harvester.github.io/tests/manual/hosts/add_disk_on_ui/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/add_disk_on_ui/ - Ref: https://github.com/harvester/harvester/issues/1608 Verify Items NVMe disk can only be added once on UI Case: add new NVMe disk on dashboard UI Install Harvester with 2 nodes Power off 2nd node Update VM&rsquo;s xml definition (by using virsh edit or virt-manager) Create nvme.img block: dd if=/dev/zero of=/var/lib/libvirt/images/nvme.img bs=1M count=4096 change owner chown qemu:qemu /var/lib/libvirt/images/nvme.img update &lt;domain type=&quot;kvm&quot;&gt; to &lt;domain type=&quot;kvm&quot; xmlns:qemu=&quot;http://libvirt.org/schemas/domain/qemu/1.0&quot;&gt; append xml node into domain as below: &lt;qemu:commandline&gt; &lt;qemu:arg value=&#34;-drive&#34;/&gt; &lt;qemu:arg value=&#34;file=/var/lib/libvirt/images/nvme. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1608">https://github.com/harvester/harvester/issues/1608</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>NVMe disk can only be added once on UI</li> </ul> <h2 id="case-add-new-nvme-disk-on-dashboard-ui">Case: add new NVMe disk on dashboard UI</h2> <ol> <li>Install Harvester with 2 nodes</li> <li>Power off 2nd node</li> <li>Update VM&rsquo;s xml definition (by using <code>virsh edit</code> or virt-manager) <ul> <li>Create <strong>nvme.img</strong> block: <code>dd if=/dev/zero of=/var/lib/libvirt/images/nvme.img bs=1M count=4096</code></li> <li>change owner <code>chown qemu:qemu /var/lib/libvirt/images/nvme.img</code></li> <li>update <code>&lt;domain type=&quot;kvm&quot;&gt;</code> to <code>&lt;domain type=&quot;kvm&quot; xmlns:qemu=&quot;http://libvirt.org/schemas/domain/qemu/1.0&quot;&gt;</code></li> <li>append xml node into <strong>domain</strong> as below:</li> </ul> </li> </ol> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-xml" data-lang="xml"><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:commandline&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;-drive&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;file=/var/lib/libvirt/images/nvme.img,if=none,id=D22,format=raw&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span 
style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;-device&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;nvme,drive=D22,serial=1234&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;/qemu:commandline&gt;</span> </span></span></code></pr Disk devices used for VM storage should be globally configurable https://harvester.github.io/tests/manual/hosts/disk-devices-used-for-vm-storage-globally-configurable/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/disk-devices-used-for-vm-storage-globally-configurable/ - Related issue: #1241 Disk devices used for VM storage should be globally configurable Related issue: #1382 Exclude OS root disk and partitions on forced GPT partition Related issue: #1599 Extra disk auto provision from installation may cause NDM can&rsquo;t find a valid longhorn node to provision Category: Storage Test Scenarios (Checked means verification PASS) BIOS firmware + No MBR (Default) + Auto disk` provisioning config BIOS firmware + MBR + Auto disk provisioning config UEFI firmware + GPT (Default) + Auto disk provisioning config BIOS firmware + GPT (Default) +Auto Provisioning on harvester-config Environment setup Scenario 1: Node type: Create + <ul> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1241">#1241</a> Disk devices used for VM storage should be globally configurable</p> </li> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1382">#1382</a> Exclude OS root disk and partitions on forced GPT partition</p> </li> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1599">#1599</a> Extra disk auto provision from installation may cause NDM can&rsquo;t find a valid longhorn node to provision</p> </li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="test-scenarios">Test Scenarios</h2> <p>(Checked means verification <code>PASS</code>)</p> Download backing images https://harvester.github.io/tests/manual/_incoming/1436__allowing_users_to_download_backing_images/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1436__allowing_users_to_download_backing_images/ - Ref: https://github.com/harvester/harvester/issues/1436 Verify Steps: Install Harvester with any nodes Create a Image img1 Click the details of img1, Download Button should be available Click Download button, img1 should able to be downloaded and downloaded successfully. 
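After downloading img1, the file can be sanity-checked locally; a small sketch (file name and format are assumptions):
$ qemu-img info img1.qcow2   # reports the image format and virtual size
$ sha256sum img1.qcow2       # compare with the checksum of the original source image, if available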
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/1436">https://github.com/harvester/harvester/issues/1436</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/189675005-a7509189-f0c3-42e4-b5a4-d8c1bc1f6341.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create a Image <code>img1</code></li> <li>Click the details of <code>img1</code>, <strong>Download</strong> Button should be available</li> <li>Click <strong>Download</strong> button, <code>img1</code> should able to be downloaded and downloaded successfully.</li> </ol> Download host YAML https://harvester.github.io/tests/manual/hosts/download-host-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/download-host-yaml/ - Navigate to the Hosts page and select the node Click Download Yaml Expected Results The Yaml should get downloaded. + <ol> <li>Navigate to the Hosts page and select the node</li> <li>Click Download Yaml</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The Yaml should get downloaded.</li> </ol> Download kubeconfig after shutting down harvester cluster https://harvester.github.io/tests/manual/misc/download-kubeconfig-after-shutting-down-harvester-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/misc/download-kubeconfig-after-shutting-down-harvester-cluster/ - Related issues: #1475 After shutting down the cluster the kubeconfig becomes invalid Category: Host Verification Steps Shutdown harvester node 3, wait for fully power off Shutdown harvester node 2, wait for fully power off Shutdown harvester node 1, wait for fully power off Wait for more than hours or over night Power on node 1 to console page until you see management url Power on node 2 to console page until you see management url + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1475">#1475</a> After shutting down the cluster the kubeconfig becomes invalid</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Shutdown harvester node 3, wait for fully power off</p> </li> <li> <p>Shutdown harvester node 2, wait for fully power off</p> </li> <li> <p>Shutdown harvester node 1, wait for fully power off</p> </li> <li> <p>Wait for more than hours or over night</p> </li> <li> <p>Power on node 1 to console page until you see management url <img src="https://user-images.githubusercontent.com/29251855/145156486-60507643-8a96-4b4a-862d-367c41665e6b.png" alt="image"></p> Edit a VM and add install Enable usb tablet option (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-and-add-install-enable-usb-tablet-option/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-and-add-install-enable-usb-tablet-option/ - Add Enable usb tablet Option Save/Create VM Expected Results Machine starts successfully Enable usb tablet shows In YAML In Form + <ol> <li>Add Enable usb tablet Option</li> <li>Save/Create VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts successfully</li> <li>Enable usb tablet shows <ul> <li>In YAML</li> <li>In Form</li> </ul> </li> </ol> Edit a VM and add install guest agent option (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-and-add-install-guest-agent-option/ Mon, 01 Jan 0001 00:00:00 +0000 
https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-and-add-install-guest-agent-option/ - Add install Guest Agent Option Save/Create VM Expected Results Machine starts successfully Guest Agent Option shows In YAML In Form Guest Agent is installed + <ol> <li>Add install Guest Agent Option</li> <li>Save/Create VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts successfully</li> <li>Guest Agent Option shows <ul> <li>In YAML</li> <li>In Form</li> </ul> </li> <li>Guest Agent is installed</li> </ol> Edit a VM from the form to add Network Data https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-form-to-add-network-data/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-form-to-add-network-data/ - Add Network Data to the VM Here is an example of Network Data config to add DHCP to the physical interface eth0 network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp Save/Create the VM Expected Results Machine starts succesfully Network Data should show in YAML Network Datashould show in Form Machine should have DHCP for network on eth0 + <ol> <li>Add Network Data to the VM <ul> <li>Here is an example of Network Data config to add DHCP to the physical interface eth0</li> </ul> <pre tabindex="0"><code>network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp </code></pre></li> <li>Save/Create the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts succesfully</li> <li>Network Data should show in YAML</li> <li>Network Datashould show in Form</li> <li>Machine should have DHCP for network on eth0</li> </ol> Edit a VM from the form to add user data (e2e_fe) https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-form-to-add-user-data/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-form-to-add-user-data/ - Add User data to the VM Here is an example of user data config to add a password `` #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Save/Create the VM Expected Results Machine starts succesfully User data should In YAML In Form Machine should have user password set + <ol> <li> <p>Add User data to the VM</p> <ul> <li>Here is an example of user data config to add a password `` #cloud-config password: password chpasswd: {expire: False} sshpwauth: True</li> </ul> <pre tabindex="0"><code></code></pre></li> <li> <p>Save/Create the VM</p> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts succesfully</li> <li>User data should <ul> <li>In YAML</li> <li>In Form</li> </ul> </li> <li>Machine should have user password set</li> </ol> Edit a VM from the YAML to add Network Data (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-yaml-to-add-network-data/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-yaml-to-add-network-data/ - Add Network Data to the VM Here is an example of Network Data config to add DHCP to the physical interface eth0 network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp Save/Create the VM Expected Results Machine starts succesfully Network Data should show in YAML Network Datashould show in Form Machine should have DHCP for network on eth0 + <ol> <li>Add Network Data to the VM <ul> <li>Here is an example of Network Data config to add 
DHCP to the physical interface eth0</li> </ul> <pre tabindex="0"><code>network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp </code></pre></li> <li>Save/Create the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts succesfully</li> <li>Network Data should show in YAML</li> <li>Network Datashould show in Form</li> <li>Machine should have DHCP for network on eth0</li> </ol> Edit a VM from the YAML to add user data (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-yaml-to-add-user-data/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-yaml-to-add-user-data/ - Add User data to the VM Here is an example of user data config to add a password `` #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Save/Create the VM Expected Results Machine starts succesfully User data should In YAML In Form Machine should have user password set + <ol> <li>Add User data to the VM <ul> <li>Here is an example of user data config to add a password `` #cloud-config password: password chpasswd: {expire: False} sshpwauth: True</li> </ul> <pre tabindex="0"><code></code></pre></li> <li>Save/Create the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts succesfully</li> <li>User data should <ul> <li>In YAML</li> <li>In Form</li> </ul> </li> <li>Machine should have user password set</li> </ol> Edit an existing VM to another machine type (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-an-existing-vm-to-another-machine-type/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-an-existing-vm-to-another-machine-type/ - Set up the VM with the appropriate machine type Save/create Expected Results Machine should start sucessfully Machine should show the new machine type in the config and in the YAML + <ol> <li>Set up the VM with the appropriate machine type</li> <li>Save/create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine should start sucessfully</li> <li>Machine should show the new machine type in the config and in the YAML</li> </ol> Edit backup read YAML from file https://harvester.github.io/tests/manual/backup-and-restore/edit-backup-read-yaml-from-file/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/edit-backup-read-yaml-from-file/ - Edit YAML for backup Read from File Show Diff Save Expected Results Diff should show changes Backup should be updated + <ol> <li>Edit YAML for backup</li> <li>Read from File</li> <li>Show Diff</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Diff should show changes</li> <li>Backup should be updated</li> </ol> Edit backup via YAML (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/edit-backup-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/edit-backup-yaml/ - Edit YAML for backup Show Diff Save Expected Results Diff should show changes Backup should be updated + <ol> <li>Edit YAML for backup</li> <li>Show Diff</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Diff should show changes</li> <li>Backup should be updated</li> </ol> Edit Config (e2e_be) https://harvester.github.io/tests/manual/hosts/edit-config/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/edit-config/ - Navigate to the 
Hosts page and select the node Click edit config. Add description and other details Try to modify the network config Expected Results The edited values should be saved and reflected on the page. + <ol> <li>Navigate to the Hosts page and select the node</li> <li>Click edit config.</li> <li>Add description and other details</li> <li>Try to modify the network config</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The edited values should be saved and reflected on the page.</li> </ol> Edit Config YAML (e2e_be) https://harvester.github.io/tests/manual/hosts/edit-config-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/edit-config-yaml/ - Navigate to the Hosts page and select the node Click edit config through YAML. Add description and other details Try to modify the network config Expected Results The edited values should be saved and reflected on the page. + <ol> <li>Navigate to the Hosts page and select the node</li> <li>Click edit config through YAML.</li> <li>Add description and other details</li> <li>Try to modify the network config</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The edited values should be saved and reflected on the page.</li> </ol> Edit images (e2e_be_fe) https://harvester.github.io/tests/manual/images/edit-images/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/edit-images/ - Edit image. Try to edit the description Try to edit the URL Try to edit the Labels Expected Results User should be able to edit the description and Labels User should not be able to edit the URL + <ol> <li>Edit image. <ul> <li>Try to edit the description</li> <li>Try to edit the URL</li> <li>Try to edit the Labels</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>User should be able to edit the description and Labels</li> <li>User should not be able to edit the URL</li> </ol> Edit network via form change external VLAN to management network https://harvester.github.io/tests/manual/network/edit-network-form-change-vlan-to-management/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/edit-network-form-change-vlan-to-management/ - Edit VM and change external VLAN to management network with bridge type via the web form Ping VM Attempt to SSH to VM Expected Results VM should save and reboot You should be able to ping the VM from an external network You should be able to SSH to VM + <ol> <li>Edit VM and change external VLAN to management network with bridge type via the web form</li> <li>Ping VM</li> <li>Attempt to SSH to VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save and reboot</li> <li>You should be able to ping the VM from an external network</li> <li>You should be able to SSH to VM</li> </ol> Edit network via form change management network to external VLAN https://harvester.github.io/tests/manual/network/edit-network-form-change-management-to-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/edit-network-form-change-management-to-vlan/ - Edit VM and change management network to external VLAN with bridge type via the web form Ping VM Attempt to SSH to VM Expected Results VM should save and reboot You should be able to ping the VM from an external network You should be able to SSH to VM + <ol> <li>Edit VM and change management network to external VLAN with bridge type via the web form</li> <li>Ping VM</li> <li>Attempt to SSH to VM</li> </ol> <h2 
id="expected-results">Expected Results</h2> <ol> <li>VM should save and reboot</li> <li>You should be able to ping the VM from an external network</li> <li>You should be able to SSH to VM</li> </ol> Edit network via YAML change external VLAN to management network (e2e_be) https://harvester.github.io/tests/manual/network/edit-network-yaml-change-vlan-to-management/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/edit-network-yaml-change-vlan-to-management/ - Edit VM and change external VLAN to management network with bridge type via YAML Ping VM Attempt to SSH to VM Expected Results VM should save and reboot You should be able to ping the VM from an external network You should be able to SSH to VM + <ol> <li>Edit VM and change external VLAN to management network with bridge type via YAML</li> <li>Ping VM</li> <li>Attempt to SSH to VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save and reboot</li> <li>You should be able to ping the VM from an external network</li> <li>You should be able to SSH to VM</li> </ol> Edit network via YAML change management network to external VLAN (e2e_be) https://harvester.github.io/tests/manual/network/edit-network-yaml-change-management-to-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/edit-network-yaml-change-management-to-vlan/ - Edit VM and change management network to external VLAN with bridge type via YAML Ping VM Attempt to SSH to VM Expected Results VM should save and reboot You should be able to ping the VM from an external network You should be able to SSH to VM + <ol> <li>Edit VM and change management network to external VLAN with bridge type via YAML</li> <li>Ping VM</li> <li>Attempt to SSH to VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save and reboot</li> <li>You should be able to ping the VM from an external network</li> <li>You should be able to SSH to VM</li> </ol> Edit vm and insert ssh and check the ssh key is accepted for the login (e2e_be_fe) https://harvester.github.io/tests/manual/virtual-machines/edit-vm-and-insert-ssh-and-check-the-ssh-key-is-accepted-for-the-login/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-and-insert-ssh-and-check-the-ssh-key-is-accepted-for-the-login/ - Edit VM and add SSH Key Save VM Expected Results You should be able to ssh in with correct SSH private key You should not be able to SSH in with incorrect SSH private key + <ol> <li>Edit VM and add SSH Key</li> <li>Save VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should be able to ssh in with correct SSH private key</li> <li>You should not be able to SSH in with incorrect SSH private key</li> </ol> Edit vm config after Eject CDROM and delete volume https://harvester.github.io/tests/manual/virtual-machines/5264-edit-vm-config-after-eject-cdrom-delete-volume/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/5264-edit-vm-config-after-eject-cdrom-delete-volume/ - Related issues: #5264 [BUG] After EjectCD from vm and edit config of vm displays empty page: &ldquo;Cannot read properties of null&rdquo; Category: Virtual Machines Verification Steps Upload the ISO type desktop image (e.g ubuntu-20.04.4-desktop-amd64.iso) Create a vm named vm1 with the iso image Open the web console to check content Click EjectCD after vm running Select the delete volume option Wait until vm restart to running Click the edit 
config Back to the virtual machine page Click the vm1 name Expected Results Check can edit vm config of vm1 to display all settings correctly Check can display the current vm1 settings correctly + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/5264">#5264</a> [BUG] After EjectCD from vm and edit config of vm displays empty page: &ldquo;Cannot read properties of null&rdquo;</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Virtual Machines</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Upload the ISO type desktop image (e.g ubuntu-20.04.4-desktop-amd64.iso)</li> <li>Create a vm named <code>vm1</code> with the iso image</li> <li>Open the web console to check content</li> <li>Click EjectCD after vm running</li> <li>Select the <code>delete volume</code> option</li> <li>Wait until vm restart to running</li> <li>Click the edit config</li> <li>Back to the virtual machine page</li> <li>Click the <code>vm1</code> name</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Check can edit vm config of <code>vm1</code> to display all settings correctly <img src="https://harvester.github.io/tests/images/virtual-machines/5264-edit-vm-cofig-after-delete-volume.png" alt="images/virtual-machines/5264-edit-vm-cofig-after-delete-volume.png"></li> <li>Check can display the current <code>vm1</code> settings correctly</li> </ul> Edit VM Form Negative https://harvester.github.io/tests/manual/virtual-machines/negative-edit-vm-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/negative-edit-vm-form/ - In a multi-node setup disconnect/shutdown the node where the VM is running Edit the VM via form Save the VM Expected Results You should not be able to save the edited Form You should get an error + <ol> <li>In a multi-node setup disconnect/shutdown the node where the VM is running</li> <li>Edit the VM via form</li> <li>Save the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should not be able to save the edited Form</li> <li>You should get an error</li> </ol> Edit vm network and verify the network is working as per configuration (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-vm-network-and-verify-the-network-is-working-as-per-configuration-/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-network-and-verify-the-network-is-working-as-per-configuration-/ - Edit VM network Save Expected Results VM should save VM should restart if restart checkbox is checked Changes should show In Form In YAML Network should function as desired + <ol> <li>Edit VM network</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should restart if restart checkbox is checked</li> <li>Changes should show <ul> <li>In Form</li> <li>In YAML</li> </ul> </li> <li>Network should function as desired</li> </ol> Edit VM via form with CPU https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-form-with-cpu/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-form-with-cpu/ - Edit VM Save Expected Results VM should save VM should restart if restart checkbox is checked Changes should show In Form In YAML In VM list + <ol> <li>Edit VM</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should restart if restart checkbox is checked</li> <li>Changes should show <ul> 
<li>In Form</li> <li>In YAML</li> <li>In VM list</li> </ul> </li> </ol> Edit VM via form with CPU and Memory https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-form-with-cpu-and-memory/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-form-with-cpu-and-memory/ - Edit VM Save Expected Results VM should save VM should restart if restart checkbox is checked Changes should show In Form In YAML In VM list + <ol> <li>Edit VM</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should restart if restart checkbox is checked</li> <li>Changes should show <ul> <li>In Form</li> <li>In YAML</li> <li>In VM list</li> </ul> </li> </ol> Edit VM via form with Memory https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-form-with-memory/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-form-with-memory/ - Edit VM Save Expected Results VM should save VM should restart if restart checkbox is checked Changes should show In Form In YAML In VM list + <ol> <li>Edit VM</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should restart if restart checkbox is checked</li> <li>Changes should show <ul> <li>In Form</li> <li>In YAML</li> <li>In VM list</li> </ul> </li> </ol> Edit VM via YAML with CPU (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-yaml-with-cpu/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-yaml-with-cpu/ - Edit VM Save Expected Results VM should save VM should restart if restart checkbox is checked Changes should show In Form In YAML In VM list + <ol> <li>Edit VM</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should restart if restart checkbox is checked</li> <li>Changes should show <ul> <li>In Form</li> <li>In YAML</li> <li>In VM list</li> </ul> </li> </ol> Edit VM via YAML with CPU and Memory (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-yaml-with-cpu-and-memory/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-yaml-with-cpu-and-memory/ - Edit VM Save Expected Results VM should save VM should restart if restart checkbox is checked Changes should show In Form In YAML In VM list + <ol> <li>Edit VM</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should restart if restart checkbox is checked</li> <li>Changes should show <ul> <li>In Form</li> <li>In YAML</li> <li>In VM list</li> </ul> </li> </ol> Edit VM via YAML with Memory (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-yaml-with-memory/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-yaml-with-memory/ - Edit VM Save Expected Results VM should save VM should restart if restart checkbox is checked Changes should show In Form In YAML In VM list + <ol> <li>Edit VM</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should restart if restart checkbox is checked</li> <li>Changes should show <ul> <li>In Form</li> <li>In YAML</li> <li>In VM list</li> </ul> </li> </ol> Edit VM with resources that are only on one node in cluster CPU and Memory 
https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu-and-memory/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu-and-memory/ - Edit a VM with resources that are only available on one node in cluster. Expected Results VM should save VM should be reassigned to node that has available resources VM should boot VM should pass health checks + <ol> <li>Edit a VM with resources that are only available on one node in cluster.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should be reassigned to node that has available resources</li> <li>VM should boot</li> <li>VM should pass health checks</li> </ol> Edit VM YAML Negative https://harvester.github.io/tests/manual/virtual-machines/q-negative-edit-vm-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/q-negative-edit-vm-yaml/ - In a multi-node setup disconnect/shutdown the node where the VM is running Edit the VM via YAML Save the VM Expected Results SSH to the node and check the nodes has components deleted. + <ol> <li>In a multi-node setup disconnect/shutdown the node where the VM is running</li> <li>Edit the VM via YAML</li> <li>Save the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>SSH to the node and check the nodes has components deleted.</li> </ol> Edit Volume Form add label https://harvester.github.io/tests/manual/volumes/edit-volume-form-add-label/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/edit-volume-form-add-label/ - Navigate to volumes page Edit Volume with Form Click Labels Add label Click Save Open VM again and click the config tab Verify that label was saved Expected Results Volume should save Label should add Label should show when re-opened + <ol> <li>Navigate to volumes page</li> <li>Edit Volume with Form</li> <li>Click Labels</li> <li>Add label</li> <li>Click Save</li> <li>Open VM again and click the config tab</li> <li>Verify that label was saved</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Volume should save</li> <li>Label should add</li> <li>Label should show when re-opened</li> </ol> Edit volume increase size via form (e2e_fe) https://harvester.github.io/tests/manual/volumes/edit-volume-increase-size-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/edit-volume-increase-size-form/ - Stop the vm Navigate to volumes page Edit Volume via form Increase size Click Save Connect to VM via console Check size of root disk Expected Results VM should stop VM should reboot after saving Disk should be resized + <ol> <li>Stop the vm</li> <li>Navigate to volumes page</li> <li>Edit Volume via form</li> <li>Increase size</li> <li>Click Save</li> <li>Connect to VM via console</li> <li>Check size of root disk</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should stop</li> <li>VM should reboot after saving</li> <li>Disk should be resized</li> </ol> Edit volume increase size via YAML (e2e_be) https://harvester.github.io/tests/manual/volumes/edit-volume-increase-size-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/edit-volume-increase-size-yaml/ - Stop the vm Navigate to volumes page Edit Volume as YAML Increase size Click Save Connect to VM via console Check size of root disk Expected Results VM 
should stop VM should reboot after saving Disk should be resized + <ol> <li>Stop the vm</li> <li>Navigate to volumes page</li> <li>Edit Volume as YAML</li> <li>Increase size</li> <li>Click Save</li> <li>Connect to VM via console</li> <li>Check size of root disk</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should stop</li> <li>VM should reboot after saving</li> <li>Disk should be resized</li> </ol> Edit Volume YAML add label (e2e_be) https://harvester.github.io/tests/manual/volumes/edit-volume-yaml-add-label/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/edit-volume-yaml-add-label/ - Navigate to volumes page Edit Volume as YAML Add label to config Click Save Open VM again and click the config tab Verify that label was saved Expected Results Volume should save Label should add Label should show when re-opened + <ol> <li>Navigate to volumes page</li> <li>Edit Volume as YAML</li> <li>Add label to config</li> <li>Click Save</li> <li>Open VM again and click the config tab</li> <li>Verify that label was saved</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Volume should save</li> <li>Label should add</li> <li>Label should show when re-opened</li> </ol> Enable Harvester addons and check deployment state https://harvester.github.io/tests/manual/advanced/addons/5337-enable-addons-check-deployment/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/addons/5337-enable-addons-check-deployment/ - Related issues: #5337 [BUG] Failed to enable vm-importer, pcidevices and harvester-seeder controller addons, keep stuck in &ldquo;Enabling&rdquo; state Category: Addons Verification Steps Prepare three nodes Harvester cluster Open Advanced -&gt; Addons page Access to harvester node machine Switch to root user and open k9s Enable the vm-importer, pci-devices and harvester-seeder addons Check the corresponding jobs and logs Enable rest of the addons nvidia-driver-toolkit, rancher-monitoring and rancher-logging Expected Results Check the vm-importer, pci-devices and harvester-seeder display in Deployment Successful Check the vm-importer-controller, pci-devices-controller and harvester-seeder jobs and the related helm-install chart job all running well on the K9s Check the nvidia-driver-toolkit, rancher-monitoring and rancher-logging display in Deployment Successful Check the nvidia-driver-toolkit, rancher-monitoring and rancher-logging jobs and the related helm-install chart job all running well on the K9s + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/5337">#5337</a> [BUG] Failed to enable vm-importer, pcidevices and harvester-seeder controller addons, keep stuck in &ldquo;Enabling&rdquo; state</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Addons</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare three nodes Harvester cluster</li> <li>Open Advanced -&gt; <code>Addons</code> page</li> <li>Access to harvester node machine</li> <li>Switch to root user and open k9s</li> <li>Enable the <code>vm-importer</code>, <code>pci-devices</code> and <code>harvester-seeder</code> addons</li> <li>Check the corresponding jobs and logs</li> <li>Enable rest of the addons <code>nvidia-driver-toolkit</code>, <code>rancher-monitoring</code> and <code>rancher-logging</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Check the <code>vm-importer</code>, <code>pci-devices</code> and <code>harvester-seeder</code> display in 
<code>Deployment Successful</code></li> <li>Check the <code>vm-importer-controller</code>, <code>pci-devices-controller</code> and <code>harvester-seeder</code> jobs and the related helm-install chart job all running well on the K9s</li> <li>Check the <code>nvidia-driver-toolkit</code>, <code>rancher-monitoring</code> and <code>rancher-logging</code> display in <code>Deployment Successful</code> <img src="https://harvester.github.io/tests/images/addons/5337-enable-all-addons.png" alt="images/addons/5337-enable-all-addons.png"></li> <li>Check the <code>nvidia-driver-toolkit</code>, <code>rancher-monitoring</code> and <code>rancher-logging</code> jobs and the related helm-install chart job all running well on the K9s</li> </ul> enable/disable alertmanager on demand https://harvester.github.io/tests/manual/_incoming/2518_enabledisable_alertmanager_on_demand/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2518_enabledisable_alertmanager_on_demand/ - Ref: https://github.com/harvester/harvester/issues/2518 Verify Steps: Install Harvester with any nodes Login to Dashboard, navigate to Monitoring &amp; Logging/Monitoring/Configuration then select Alertmanager tab Option Button Enabled should be checked Select Grafana tab then access Grafana Search Alertmanager to access Overview dashboard Data should be available and keep updating + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2518">https://github.com/harvester/harvester/issues/2518</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/193554680-c2d6f7c0-5cf0-44ee-803e-c7abda408774.png" alt="image"> <img src="https://user-images.githubusercontent.com/5169694/193554761-1f28c3b9-8964-4bfa-8069-d5bcc7d8d837.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard, navigate to <strong>Monitoring &amp; Logging/Monitoring/Configuration</strong> then select <strong>Alertmanager</strong> tab</li> <li>Option Button <code>Enabled</code> should be checked</li> <li>Select <strong>Grafana</strong> tab then access Grafana</li> <li>Search <em>Alertmanager</em> to access <em>Overview</em> dashboard</li> <li>Data should be available and keep updating</li> </ol> Enabling and Tuning KSM https://harvester.github.io/tests/manual/_incoming/2302_enabling_and_tuning_ksm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2302_enabling_and_tuning_ksm/ - Ref: https://github.com/harvester/harvester/issues/2302 Verify Steps: Install Harvester with any nodes Login to Dashboard and Navigate to hosts Edit node1&rsquo;s Ksmtuned to Run and ThresCoef to 85 then Click Save Login to node1&rsquo;s console, execute kubectl get ksmtuned -oyaml --field-selector metadata.name=&lt;node1&gt; Fields in spec should be the same as Dashboard configured Create an image for VM creation Create multiple VMs with 2Gi+ memory and schedule on &lt;node1&gt; (memory size reflect to &rsquo;s maximum size, total of VMs&rsquo; memory should greater than 40%) Execute watch -n1 grep . 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/2302">https://github.com/harvester/harvester/issues/2302</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard and Navigate to hosts</li> <li>Edit <em>node1</em>&rsquo;s <strong>Ksmtuned</strong> to <code>Run</code> and <strong>ThresCoef</strong> to <code>85</code> then Click <strong>Save</strong></li> <li>Login to <em>node1</em>&rsquo;s console, execute <code>kubectl get ksmtuned -oyaml --field-selector metadata.name=&lt;node1&gt;</code></li> <li>Fields in <code>spec</code> should be the same as Dashboard configured</li> <li>Create an image for VM creation</li> <li>Create multiple VMs with 2Gi+ memory and schedule on <code>&lt;node1&gt;</code> (memory size reflect to <!-- raw HTML omitted -->&rsquo;s maximum size, total of VMs&rsquo; memory should greater than 40%)</li> <li>Execute <code>watch -n1 grep . /sys/kernel/mm/ksm/*</code> to monitor ksm&rsquo;s status change <ul> <li><code>/sys/kernel/mm/ksm/run</code> should be update to <code>1</code> after VMs started</li> <li><code>/sys/kernel/mm/ksm/page_*</code> should updating continuously</li> </ul> </li> <li>Login to Dashboard then navigate to <em>Hosts</em>, click <!-- raw HTML omitted --></li> <li>In the Tab of <strong>Ksmtuned</strong>, values in Statistics section should not be <code>0</code>. (data in this section will be updated per min, so it not equals to console&rsquo;s output was expected.)</li> <li>Stop all VMs scheduling to <code>&lt;node1&gt;</code>, the monitor data <code>/sys/kernel/mm/ksm/run</code> should be update to <code>0</code> (this is expected as it is designed to dynamically spawn ksm up when <code>ThresCoef</code> hits)</li> <li>Update <!-- raw HTML omitted -->&rsquo;s <strong>Ksmtuned</strong> to <code>Run: Prune</code></li> <li>Monitor data in Step.8 should reflect to: <ul> <li><code>/sys/kernel/mm/ksm/run</code> should be update to <code>2</code></li> <li><code>/sys/kernel/mm/ksm/pages_*</code> should be update to <code>0</code></li> </ul> </li> <li>Update <!-- raw HTML omitted -->&rsquo;s <strong>Ksmtuned</strong> to <code>Run: Stop</code></li> <li>Monitor data in Step.8 should reflect to: <ul> <li><code>/sys/kernel/mm/ksm/run</code> should be update to <code>0</code></li> </ul> </li> </ol> Enabling vlan on a bonded NIC on vagrant install https://harvester.github.io/tests/manual/network/enabling-vlan-on-bonded-nic-vagrant-install/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/enabling-vlan-on-bonded-nic-vagrant-install/ - Related issues: #1541 Enabling vlan on a bonded NIC breaks the Harvester setup Category: Network Verification Steps Pull ipxe example from https://github.com/harvester/ipxe-examples Vagrant pxe install 3 nodes harvester Access harvester settings page Open settings -&gt; vlan Enable virtual network and set with bond0 Navigate to every page to check harvester is working Create a vlan based on bon0 Expected Results Enable virtual network with bond0 will not make harvester service out of work. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1541">#1541</a> Enabling vlan on a bonded NIC breaks the Harvester setup</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Pull ipxe example from <a href="https://github.com/harvester/ipxe-examples">https://github.com/harvester/ipxe-examples</a></li> <li>Vagrant pxe install 3 nodes harvester</li> <li>Access harvester settings page</li> <li>Open <code>settings</code> -&gt; <code>vlan</code></li> <li>Enable virtual network and set with <code>bond0</code></li> <li>Navigate to every page to check harvester is working</li> <li>Create a vlan based on <code>bon0</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Enable virtual network with <code>bond0</code> will not make harvester service out of work. <img src="https://user-images.githubusercontent.com/29251855/143804059-f8fc0bee-b42a-4daa-b0bb-438b64b75db2.png" alt="image"></p> enhance double check of VM's resource modification https://harvester.github.io/tests/manual/_incoming/2869_enhance_double_check_of_vms_resource_modification/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2869_enhance_double_check_of_vms_resource_modification/ - Ref: https://github.com/harvester/harvester/issues/2869 Verify Steps: Install Harvester with any nodes Create an Image for VM creation Create VM vm1 Imitate video recording (as below) to test https://user-images.githubusercontent.com/5169694/193790263-19379641-e282-445f-831f-8da039c15e77.mp4 + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2869">https://github.com/harvester/harvester/issues/2869</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create an Image for VM creation</li> <li>Create VM <code>vm1</code></li> <li>Imitate video recording (as below) to test</li> </ol> <p><a href="https://user-images.githubusercontent.com/5169694/193790263-19379641-e282-445f-831f-8da039c15e77.mp4">https://user-images.githubusercontent.com/5169694/193790263-19379641-e282-445f-831f-8da039c15e77.mp4</a></p> enhance node scheduling when vm selects network https://harvester.github.io/tests/manual/_incoming/2982_enhance_node_scheduling_when_vm_selects_network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2982_enhance_node_scheduling_when_vm_selects_network/ - Ref: https://github.com/harvester/harvester/issues/2982 Criteria Scheduling rule added automatically when select specific network Verify Steps: go to Cluster Networks / Config page, create a new Cluster Network (eg: test) Create a new network config in the test Cluster Network. (Select a specific node) go to Network page to create a new network (e.g: test-untagged), select UntaggedNetwork type and select test cluster network. click Create button go to VM create page, fill all required value, Click Networks tab, select default/test-untagged network, click Create button The VM is successfully created, but the scheduled node may not match the Network Config ! 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/2982">https://github.com/harvester/harvester/issues/2982</a></p> <h3 id="criteria">Criteria</h3> <p>Scheduling rule added automatically when select specific network <img src="https://user-images.githubusercontent.com/5169694/197729616-a6fcda2e-42ba-469f-b6c1-9c297bef1a45.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>go to <code>Cluster Networks / Config</code> page, create a new Cluster Network (eg: test)</li> <li>Create a new <code>network config</code> in the <code>test</code> Cluster Network. (Select a specific node) <img src="https://images.zenhubusercontent.com/60345555ec1db310c78aa2b8/431ba9b2-56e7-48af-bf4d-6e0ba964ebd3" alt="image.png"></li> <li>go to <code>Network</code> page</li> <li>to create a new network (e.g: <code>test-untagged</code>), select <code>UntaggedNetwork</code> type and select <code>test</code> cluster network. click <code>Create</code> button</li> <li>go to VM create page, fill all required value, Click <code>Networks</code> tab, select <code>default/test-untagged</code> network, click <code>Create</code> button</li> <li>The VM is successfully created, but the scheduled node may not match the Network Config ![image.png]</li> </ol> Filter backups https://harvester.github.io/tests/manual/backup-and-restore/filter-backups/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/filter-backups/ - Enter in string in filter input field Columns available for matching: State &ldquo;Ready&rdquo; &ldquo;Progressing&rdquo; Name Target VM With string With matching string Input Clear With non-matching string Input Clear Clear String Expected Results List should filter based on string List should re-populate after clearing string + <ol> <li>Enter in string in filter input field <ul> <li>Columns available for matching: <ul> <li>State <ul> <li>&ldquo;Ready&rdquo;</li> <li>&ldquo;Progressing&rdquo;</li> </ul> </li> <li>Name</li> <li>Target VM</li> </ul> </li> <li>With string <ul> <li>With matching string <ul> <li>Input</li> <li>Clear</li> </ul> </li> <li>With non-matching string <ul> <li>Input Clear</li> <li>Clear String</li> </ul> </li> </ul> </li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>List should filter based on string</li> <li>List should re-populate after clearing string</li> </ol> First Time Login (e2e_fe) https://harvester.github.io/tests/manual/authentication/first-time-login/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/first-time-login/ - After successful installation of Harvester using Iso, on navigating to UI, user should be prompted to change the password. 
Verify the password rules Expected Results User should be able to login + <ol> <li>After successful installation of Harvester using Iso, on navigating to UI, user should be prompted to change the password.</li> <li>Verify the password rules</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>User should be able to login</li> </ol> Fleet support with Harvester https://harvester.github.io/tests/manual/advanced/fleet-support-with-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/fleet-support-with-harvester/ - Fleet Support Pathways Fleet Support is enabled out of the box with Harvester, no Rancher integration needed, as Fleet Support does not need any Rancher integration to function Fleet Support can be used from within Rancher w/ Harvester Fleet Support w/ Rancher Prerequisites Harvester cluster is imported into Rancher. Rancher Feature Flag harvester-baremetal-container-workload is enabled. Harvester cluster is available to view via the Explore Cluster section of Rancher. Explore the Harvester cluster: Toggle &ldquo;All Namespaces&rdquo; to be selected Search for &amp; &ldquo;star&rdquo; (marking favorite for ease of navigation): Git Repo Git Job Git Restrictions Fleet Support w/out Rancher Prerequisites An active Harvester Cluster Kubeconfig Additional Prerequisites Fork ibrokethecloud&rsquo;s Harvester Fleet Demo into your own personal GitHub Repository Take a look at the different Harvester API Resources as YAML will be scaffolded to reflect those objects respectively Additional Prerequisites Airgapped, if desired Have an Airgapped GitLab Server Running somewhere with a Repo that takes the shape of ibrokethecloud&rsquo;s Harvester Fleet Demo (setting up AirGapped GitLab Server is outside of this scope) Additional Prerequisites (Private Repository Testing), if desired Private Git Repo Key, will need to be added to -n fleet-local namespace Build a private GitHub Repo Add similar content to what ibrokethecloud&rsquo;s Harvester Fleet Demo holds but take into consideration the following ( references: GitRepo CRD &amp; Rancher Fleet Private Git Repo Blurb ): building a &ldquo;separate&rdquo; SINGLE REPOSITORY ONLY (zero-trust based) SSH Key Via something like: ssh-keygen -t rsa -b 4096 -m pem -C &#34;testing-test-key-for-private-repo-deploy-key@email. 
+ <h2 id="fleet-support-pathways">Fleet Support Pathways</h2> <ol> <li>Fleet Support is enabled out of the box with Harvester, no Rancher integration needed, as Fleet Support does not need any Rancher integration to function</li> <li>Fleet Support can be used from within Rancher w/ Harvester</li> </ol> <h3 id="fleet-support-w-rancher-prerequisites">Fleet Support w/ Rancher Prerequisites</h3> <ol> <li>Harvester cluster is imported into Rancher.</li> <li>Rancher Feature Flag <code>harvester-baremetal-container-workload</code> is enabled.</li> <li>Harvester cluster is available to view via the Explore Cluster section of Rancher.</li> <li>Explore the Harvester cluster: <ol> <li>Toggle &ldquo;All Namespaces&rdquo; to be selected</li> <li>Search for &amp; &ldquo;star&rdquo; (marking favorite for ease of navigation): <ul> <li>Git Repo</li> <li>Git Job</li> <li>Git Restrictions</li> </ul> </li> </ol> </li> </ol> <h3 id="fleet-support-wout-rancher-prerequisites">Fleet Support w/out Rancher Prerequisites</h3> <ol> <li>An active Harvester Cluster Kubeconfig</li> </ol> <h3 id="additional-prerequisites">Additional Prerequisites</h3> <ol> <li>Fork <a href="https://github.com/ibrokethecloud/harvester-fleet-demo/">ibrokethecloud&rsquo;s Harvester Fleet Demo</a> into your own personal GitHub Repository</li> <li>Take a look at the different <a href="https://docs.harvesterhci.io/v1.2/category/api">Harvester API Resources</a> as YAML will be scaffolded to reflect those objects respectively</li> </ol> <h3 id="additional-prerequisites-airgapped-if-desired">Additional Prerequisites Airgapped, if desired</h3> <ol> <li>Have an Airgapped GitLab Server Running somewhere with a Repo that takes the shape of <a href="https://github.com/ibrokethecloud/harvester-fleet-demo/">ibrokethecloud&rsquo;s Harvester Fleet Demo</a> (setting up AirGapped GitLab Server is outside of this scope)</li> </ol> <h3 id="additional-prerequisites-private-repository-testing-if-desired">Additional Prerequisites (Private Repository Testing), if desired</h3> <ol> <li><a href="https://fleet.rancher.io/gitrepo-add#adding-private-git-repository">Private Git Repo Key</a>, will need to be added to <code>-n fleet-local</code> namespace</li> <li>Build a private GitHub Repo</li> <li>Add similar content to what <a href="https://github.com/ibrokethecloud/harvester-fleet-demo/">ibrokethecloud&rsquo;s Harvester Fleet Demo</a> holds but take into consideration the following ( references: <a href="https://fleet.rancher.io/ref-gitrepo">GitRepo CRD</a> &amp; <a href="https://fleet.rancher.io/gitrepo-add#adding-private-git-repository">Rancher Fleet Private Git Repo Blurb</a> ): <ol> <li>building a &ldquo;separate&rdquo; SINGLE REPOSITORY ONLY (zero-trust based) SSH Key Via something like:</li> </ol> <pre tabindex="0"><code> ssh-keygen -t rsa -b 4096 -m pem -C &#34;testing-test-key-for-private-repo-deploy-key@email.com&#34; Generating public/private rsa key pair. 
Enter file in which to save the key (/home/mike/.ssh/id_rsa): /home/mike/.ssh/rsa_key_for_private_rancher_fleet_repo_testing Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/mike/.ssh/rsa_key_for_private_rancher_fleet_repo_testing Your public key has been saved in /home/mike/.ssh/rsa_key_for_private_rancher_fleet_repo_testing.pub </code></pr Function keys on web VNC interface https://harvester.github.io/tests/manual/_incoming/1461-function-keys-on-web-vnc-interface/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1461-function-keys-on-web-vnc-interface/ - Related issues: #1461 [UI] F keys and Alt-F keys in web VNC interface Category: Network Verification Steps Create a new VM with Ubuntu desktop 20.04 Prepare two volume Complete the installation process Open a web browser on Ubuntu desktop Check the shortcut keys combination Expected Results Check the soft shortcut keys can display and work correctly on Linux OS VM (Ubuntu desktop 20.04) Checked the following short cut can work as expected + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1461">#1461</a> [UI] F keys and Alt-F keys in web VNC interface</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a new VM with Ubuntu desktop 20.04</li> <li>Prepare two volume</li> <li>Complete the installation process</li> <li>Open a web browser on Ubuntu desktop</li> <li>Check the shortcut keys combination</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>Check the soft shortcut keys can display and work correctly on Linux OS VM (Ubuntu desktop 20.04) <img src="https://user-images.githubusercontent.com/29251855/177092853-0a9d570e-39b1-4127-ac22-2b9508d5b4f6.png" alt="image"></p> Generate Install Support Config Bundle For Single Node https://harvester.github.io/tests/manual/_incoming/1864-generate-install-support-config/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1864-generate-install-support-config/ - Related issue: #1864 Support bundle for a single node (Live/Installed) Related issue: #272 Generate supportconfig for failed installations Category: Support Environment setup Setup a single node harvester from ISO install but don&rsquo;t complete the installation Gain SSH Access to the Single Harvester Node Once Shelled into the Single Harvester Node edit the /usr/sbin/harv-install Using: harvester-installer&rsquo;s harv-install as a reference edit around line #362 adding exit 1: exit 1 trap cleanup exit check_iso save the file. 
+ <ul> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1864">#1864</a> Support bundle for a single node (Live/Installed)</p> </li> <li> <p>Related issue: <a href="https://github.com/harvester/harvester-installer/pull/272">#272</a> Generate supportconfig for failed installations</p> </li> </ul> <h2 id="category">Category:</h2> <ul> <li>Support</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup a single node harvester from ISO install but don&rsquo;t complete the installation</p> <ol> <li>Gain SSH Access to the Single Harvester Node</li> <li>Once Shelled into the Single Harvester Node edit the <code>/usr/sbin/harv-install</code></li> <li>Using: <a href="https://github.com/harvester/harvester-installer/blob/master/package/harvester-os/files/usr/sbin/harv-install#L362">harvester-installer&rsquo;s harv-install as a reference</a> edit around line #362 adding <code>exit 1</code>:</li> </ol> <pre tabindex="0"><code>exit 1 trap cleanup exit check_iso </code></pr Guest CSI Driver https://harvester.github.io/tests/manual/node-driver/guest-csi-driver/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/guest-csi-driver/ - Start rancher using docker in a vm and start harvester in another Import harvester into rancher from &ldquo;Virtualization Management&rdquo; page On rancher, enable harvester node driver at &ldquo;Cluster Management&rdquo; -&gt; &ldquo;Drivers&rdquo; -&gt; &ldquo;Node Driver&rdquo; Go back to &ldquo;Cluster Management&rdquo; and create a rke2 cluster using Harvester Once the created cluster is active on the &ldquo;Cluster Management&rdquo; page, click on the &ldquo;Explore&rdquo; Go to &ldquo;Workload&rdquo; -&gt; &ldquo;Deployment&rdquo; and &ldquo;Create&rdquo; a new deployment, during which in the page of &ldquo;Storage&rdquo;, click on &ldquo;Add Volume&rdquo; and select &ldquo;Create Persistent Volume Claim&rdquo; and select &ldquo;Harvester&rdquo; in the &ldquo;Storage Class&rdquo; Click &ldquo;Create&rdquo; to create the deployment Verify that on the Harvester side, a new volume is created. + <ol> <li>Start rancher using docker in a vm and start harvester in another</li> <li>Import harvester into rancher from &ldquo;Virtualization Management&rdquo; page</li> <li>On rancher, enable harvester node driver at &ldquo;Cluster Management&rdquo; -&gt; &ldquo;Drivers&rdquo; -&gt; &ldquo;Node Driver&rdquo;</li> <li>Go back to &ldquo;Cluster Management&rdquo; and create a rke2 cluster using Harvester</li> <li>Once the created cluster is active on the &ldquo;Cluster Management&rdquo; page, click on the &ldquo;Explore&rdquo;</li> <li>Go to &ldquo;Workload&rdquo; -&gt; &ldquo;Deployment&rdquo; and &ldquo;Create&rdquo; a new deployment, during which in the page of &ldquo;Storage&rdquo;, click on &ldquo;Add Volume&rdquo; and select &ldquo;Create Persistent Volume Claim&rdquo; and select &ldquo;Harvester&rdquo; in the &ldquo;Storage Class&rdquo;</li> <li>Click &ldquo;Create&rdquo; to create the deployment</li> <li>Verify that on the Harvester side, a new volume is created.</li> <li>Delete the created deployment and then delete the created pvc. Verify on the harvester side that the newly created volume is also deleted. 
create another deployment, say nginx:latest with 8GB storage created as step 6.</li> <li>&ldquo;Execute shell&rdquo; into the deployment above and use &ldquo;dd&rdquo; command to test the read &amp; write speed in the directory where the pvc is mounted: <ul> <li><code>dd if=/dev/zero of=tempfile bs=1M count=5120</code></li> <li><code>dd if=/dev/null of=tempfile bs=1M count=5120</code></li> </ul> </li> <li>SSH into a VM created on the bare metal and run the same <code>dd</code> command <ul> <li><code>dd if=/dev/zero of=tempfile bs=1M count=5120</code></li> <li><code>dd if=/dev/null of=tempfile bs=1M count=5120</code></li> </ul> </li> <li>Scale down the above deployment to 0 replica and resize the pvc to 15GB on the harvester side:</li> <li>Double check the pvc is resized on the longhorn side.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The cluster should have similar storage speed performance</li> <li>The PVC should resize and show it in the Longhorn UI</li> </ol> Harvester Cloud Provider compatibility check https://harvester.github.io/tests/manual/_incoming/2753-harvester-cloud-provider-compatibility/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2753-harvester-cloud-provider-compatibility/ - Related issues: #2753 [FEATURE] Harvester Cloud Provider compatibility check enhancement Category: Rancher Integration Verification Steps Open Rancher Global settings Edit the rke-metadata-config Change the default url to https://harvester-dev.oss-cn-hangzhou.aliyuncs.com/Untitled-1.json which include the following cloud provider and csi-driver chart changes &#34;charts&#34;: { &#34;harvester-cloud-provider&#34;: { &#34;repo&#34;: &#34;rancher-rke2-charts&#34;, &#34;version&#34;: &#34;1.1.0&#34; }, &#34;harvester-csi-driver&#34;: { &#34;repo&#34;: &#34;rancher-rke2-charts&#34;, &#34;version&#34;: &#34;1.1.0&#34; }, Save and reload page Open the create RKE2 cluster page Select the incomparable RKE2 version Check the Cloud provider drop down Enable Harvester API in Preference -&gt; Enable Developer Tools &amp; Features Open settings Click view API of any setting Click up open the id&quot;: &ldquo;harvester-csi-ccm-versions&rdquo; Or directly access https://192. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2753">#2753</a> [FEATURE] Harvester Cloud Provider compatibility check enhancement</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher Integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Open Rancher Global settings</li> <li>Edit the <code>rke-metadata-config</code></li> <li>Change the default url to <code>https://harvester-dev.oss-cn-hangzhou.aliyuncs.com/Untitled-1.json</code> which include the following cloud provider and csi-driver chart changes <pre tabindex="0"><code>&#34;charts&#34;: { &#34;harvester-cloud-provider&#34;: { &#34;repo&#34;: &#34;rancher-rke2-charts&#34;, &#34;version&#34;: &#34;1.1.0&#34; }, &#34;harvester-csi-driver&#34;: { &#34;repo&#34;: &#34;rancher-rke2-charts&#34;, &#34;version&#34;: &#34;1.1.0&#34; }, </code></pr Harvester pull Rancher agent image from private registry https://harvester.github.io/tests/manual/_incoming/2175-2332-harvester-pull-rancher-image-private-registry/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2175-2332-harvester-pull-rancher-image-private-registry/ - Related issues: #2175 [BUG] Harvester fails to pull Rancher agent image from private registry Related issues: #2332 [Backport v1.0] Harvester fails to pull Rancher agent image from private registry Category: Virtual Machine Verification Steps Create a harvester cluster and a ubuntu server. Make sure they can reach each other. On each harvester node, add ubuntu IP to /etc/hosts. # vim /etc/hosts &lt;host ip&gt; myregistry.local On the ubuntu server, install docker and run the following commands. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2175">#2175</a> [BUG] Harvester fails to pull Rancher agent image from private registry</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2332">#2332</a> [Backport v1.0] Harvester fails to pull Rancher agent image from private registry</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Virtual Machine</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a harvester cluster and a ubuntu server. 
Make sure they can reach each other.</li> <li>On each harvester node, add ubuntu IP to <code>/etc/hosts</code>.</li> </ol> <pre tabindex="0"><code># vim /etc/hosts &lt;host ip&gt; myregistry.local </code></pr Harvester rebase check on SLE Micro https://harvester.github.io/tests/manual/_incoming/1933-2420-harvester-rebase-check-on-sle-micro/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1933-2420-harvester-rebase-check-on-sle-micro/ - Related issues: #1933 [FEATURE] Rebase Harvester on SLE Micro for Rancher Related issues: #2420 [FEATURE] support bundle: support SLE Micro OS Category: System Verification Steps Download support bundle in support page Extract support bundle and check every file content Vagrant install master release Execute backend E2E regression test Run frontend Cypress automated test against feature Images, Networks, Virtual machines Run manual test against feature Volume, Live migration and Backup and rancher integration Expected Results Check can download support bundle correctly, check can access every file without empty + <ul> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/1933">#1933</a> [FEATURE] Rebase Harvester on SLE Micro for Rancher</p> </li> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/2420">#2420</a> [FEATURE] support bundle: support SLE Micro OS</p> </li> </ul> <h2 id="category">Category:</h2> <ul> <li>System</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Download support bundle in support page</li> <li>Extract support bundle and check every file content</li> <li>Vagrant install master release</li> <li>Execute backend E2E regression test</li> <li>Run frontend Cypress automated test against feature Images, Networks, Virtual machines</li> <li>Run manual test against feature Volume, Live migration and Backup and rancher integration</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>Check can download support bundle correctly, check can access every file without empty</p> Harvester supports event log https://harvester.github.io/tests/manual/_incoming/2748_harvester_supports_event_log/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2748_harvester_supports_event_log/ - Ref: https://github.com/harvester/harvester/issues/2748 Verified this feature has been implemented. Test Information Environment: qemu/KVM 3 nodes Harvester Version: master-250f41e4-head ui-source Option: Auto Verify Steps: Install Graylog via docker[^1] Install Harvester with any nodes Login to Dashboard then navigate to Monitoring &amp; Logging/Logging Create Cluster Output with following: Name: gelf-evts Type: Logging/Event Output: GELF Target: &lt;Graylog_IP&gt;, &lt;Graylog_Port&gt;, &lt;UDP&gt; Create Cluster Flow with following: Name: gelf-flow Type of Matches: Event Cluster Outputs: gelf-evts Create an Image for VM creation Create a vm vm1 and start it Login to Graylog dashboard then navigate to search Select update frequency New logs should be posted continuously. 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/2748">https://github.com/harvester/harvester/issues/2748</a></p> <p>Verified this feature has been implemented.</p> <h2 id="test-information">Test Information</h2> <ul> <li>Environment: <strong>qemu/KVM 3 nodes</strong></li> <li>Harvester Version: <strong>master-250f41e4-head</strong></li> <li><strong>ui-source</strong> Option: <strong>Auto</strong></li> </ul> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install <em>Graylog</em> via docker[^1]</li> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to <em>Monitoring &amp; Logging/Logging</em></li> <li>Create <strong>Cluster Output</strong> with following: <ul> <li><strong>Name</strong>: gelf-evts</li> <li><strong>Type</strong>: <code>Logging/Event</code></li> <li><strong>Output</strong>: GELF</li> <li><strong>Target</strong>: <code>&lt;Graylog_IP&gt;, &lt;Graylog_Port&gt;, &lt;UDP&gt;</code></li> </ul> </li> <li>Create <strong>Cluster Flow</strong> with following: <ul> <li><strong>Name</strong>: gelf-flow</li> <li><strong>Type</strong> of Matches: <code>Event</code></li> <li><strong>Cluster Outputs</strong>: <code>gelf-evts</code></li> </ul> </li> <li>Create an Image for VM creation</li> <li>Create a vm <code>vm1</code> and start it</li> <li>Login to <code>Graylog</code> dashboard then navigate to search</li> <li>Select update frequency <img src="https://user-images.githubusercontent.com/5169694/191725169-d1203674-13d8-487b-9fa2-e1d9394fa5c0.png" alt="image"></li> <li>New logs should be posted continuously.</li> </ol> <h3 id="code-snippets-to-setup-graylog">code snippets to setup Graylog</h3> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>docker run --name mongo -d mongo:4.2.22-rc0 </span></span><span style="display:flex;"><span>sysctl -w vm.max_map_count<span style="color:#f92672">=</span><span style="color:#ae81ff">262145</span> </span></span><span style="display:flex;"><span>docker run --name elasticsearch -p 9200:9200 -p 9300:9300 -e xpack.security.enabled<span style="color:#f92672">=</span>false -e node.name<span style="color:#f92672">=</span>es01 -it docker.elastic.co/elasticsearch/elasticsearch:6.8.23 </span></span><span style="display:flex;"><span>docker run --name graylog --link mongo --link elasticsearch -p 9000:9000 -p 12201:12201 -p 1514:1514 -p 5555:5555 -p 12202:12202 -p 12202:12202/udp -e GRAYLOG_PASSWORD_SECRET<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Graypass3WordMor!e&#34;</span> -e GRAYLOG_ROOT_PASSWORD_SHA2<span style="color:#f92672">=</span>899e9793de44cbb14f48b4fce810de122093d03705c0971752a5c15b0fa1ae03 -e GRAYLOG_HTTP_EXTERNAL_URI<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;http://127.0.0.1:9000/&#34;</span> -d graylog/graylog:4.3.5 </span></span></code></pr Harvester supports kube-audit log https://harvester.github.io/tests/manual/_incoming/2747_harvester_supports_kube-audit_log/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2747_harvester_supports_kube-audit_log/ - Ref: https://github.com/harvester/harvester/issues/2747 Verify Steps: Install Graylog via docker[^1] Install Harvester with any nodes Login to Dashboard then navigate to Monitoring &amp; Logging/Logging Create Cluster Output with following: Name: gelf-evts Type: Audit Only Output: GELF Target: &lt;Graylog_IP&gt;, 
&lt;Graylog_Port&gt;, &lt;UDP&gt; Create Cluster Flow with following: Name: gelf-flow Type of Matches: Audit Cluster Outputs: gelf-evts Create an Image for VM creation Create a vm vm1 and start it Login to Graylog dashboard then navigate to search Select update frequency New logs should be posted continuously. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2747">https://github.com/harvester/harvester/issues/2747</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install <em>Graylog</em> via docker[^1]</li> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to <em>Monitoring &amp; Logging/Logging</em></li> <li>Create <strong>Cluster Output</strong> with following: <ul> <li><strong>Name</strong>: gelf-evts</li> <li><strong>Type</strong>: <code>Audit Only</code></li> <li><strong>Output</strong>: GELF</li> <li><strong>Target</strong>: <code>&lt;Graylog_IP&gt;, &lt;Graylog_Port&gt;, &lt;UDP&gt;</code></li> </ul> </li> <li>Create <strong>Cluster Flow</strong> with following: <ul> <li><strong>Name</strong>: gelf-flow</li> <li><strong>Type</strong> of Matches: <code>Audit</code></li> <li><strong>Cluster Outputs</strong>: <code>gelf-evts</code></li> </ul> </li> <li>Create an Image for VM creation</li> <li>Create a vm <code>vm1</code> and start it</li> <li>Login to <code>Graylog</code> dashboard then navigate to search</li> <li>Select update frequency <img src="https://user-images.githubusercontent.com/5169694/191725169-d1203674-13d8-487b-9fa2-e1d9394fa5c0.png" alt="image"></li> <li>New logs should be posted continuously.</li> </ol> <h3 id="code-snippets-to-setup-graylog">code snippets to setup Graylog</h3> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>docker run --name mongo -d mongo:4.2.22-rc0 </span></span><span style="display:flex;"><span>sysctl -w vm.max_map_count<span style="color:#f92672">=</span><span style="color:#ae81ff">262145</span> </span></span><span style="display:flex;"><span>docker run --name elasticsearch -p 9200:9200 -p 9300:9300 -e xpack.security.enabled<span style="color:#f92672">=</span>false -e node.name<span style="color:#f92672">=</span>es01 -it docker.elastic.co/elasticsearch/elasticsearch:6.8.23 </span></span><span style="display:flex;"><span>docker run --name graylog --link mongo --link elasticsearch -p 9000:9000 -p 12201:12201 -p 1514:1514 -p 5555:5555 -p 12202:12202 -p 12202:12202/udp -e GRAYLOG_PASSWORD_SECRET<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Graypass3WordMor!e&#34;</span> -e GRAYLOG_ROOT_PASSWORD_SHA2<span style="color:#f92672">=</span>899e9793de44cbb14f48b4fce810de122093d03705c0971752a5c15b0fa1ae03 -e GRAYLOG_HTTP_EXTERNAL_URI<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;http://127.0.0.1:9000/&#34;</span> -d graylog/graylog:4.3.5 </span></span></code></pr Harvester uses active-backup as the default bond mode https://harvester.github.io/tests/manual/_incoming/2472_harvester_uses_active-backup_as_the_default_bond_mode/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2472_harvester_uses_active-backup_as_the_default_bond_mode/ - Ref: https://github.com/harvester/harvester/issues/2472 Verify Steps: Install Harvester via ISO The default Bond Mode should select active-backup Ater installed with active-backup mode, login to console Execute cat 
/etc/sysconfig/network/ifcfg-harvester-mgmt, BONDING_MODULE_OPTS should contain mode=active-backup + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2472">https://github.com/harvester/harvester/issues/2472</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/184838334-a723f066-8eef-4cbc-ab66-6e02b758823d.png" alt="image"> <img src="https://user-images.githubusercontent.com/5169694/184839241-3702fa7c-950e-4b51-8c18-d29d4121f848.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester via ISO</li> <li>The default <strong>Bond Mode</strong> should select <code>active-backup</code></li> <li>After installing with <code>active-backup</code> mode, login to console</li> <li>Execute <code>cat /etc/sysconfig/network/ifcfg-harvester-mgmt</code>, <strong>BONDING_MODULE_OPTS</strong> should contain <code>mode=active-backup</code></li> </ol> Host list should display the disk error message on failure https://harvester.github.io/tests/manual/hosts/host-list-should-display-disk-error-message-on-failure/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/host-list-should-display-disk-error-message-on-failure/ - Related issue: #1167 Host list should display the disk error message on table Category: Storage Verification Steps Shutdown existing node vm machine Run &ldquo;qemu-img create&rdquo; command to make a nvme.img Edit qemu/kvm xml setting to attach the nvme image Start VM Open host page and edit your target node config Add the new nvme disk Shutdown VM Remove the attach device setting in VM xml file Start VM Open Host page, the target node will show warning with unready and unschedulable disk exists Expected Results If host encounters disk ready or schedule failure, on host page the &ldquo;disk state&rdquo; will show warning With a hover tip &ldquo;Host have unready or unschedulable disks&rdquo; Can create load balancer correctly with health check setting + <ul> <li>Related issue: <a href="https://github.com/harvester/harvester/issues/1167">#1167</a> Host list should display the disk error message on table</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Shutdown existing node vm machine</li> <li>Run &ldquo;qemu-img create&rdquo; command to make a nvme.img</li> <li>Edit qemu/kvm xml setting to attach the nvme image</li> <li>Start VM</li> <li>Open host page and edit your target node config</li> <li>Add the new nvme disk</li> <li>Shutdown VM</li> <li>Remove the attach device setting in VM xml file</li> <li>Start VM</li> <li>Open Host page, the target node will show warning with unready and unschedulable disk exists</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>If host encounters disk ready or schedule failure, on host page the &ldquo;disk state&rdquo; will show <strong>warning</strong> With a hover tip &ldquo;<strong>Host have unready or unschedulable disks&rdquo;</strong></li> </ol> <p><img src="https://user-images.githubusercontent.com/29251855/138687164-877422a0-d33b-4e26-9c0b-d52b8f4e6995.png" alt="image"></p> Http proxy setting on harvester https://harvester.github.io/tests/manual/deployment/http-proxy-setting-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/http-proxy-setting-harvester/ - Related issue: #1218 Missing http proxy settings on rke2 and rancher pod Related issue: #1012 Failed to create image when deployed in private network environment Category: Network 
Environment setup Setup an airgapped harvester Clone ipxe example repository https://github.com/harvester/ipxe-examples Edit the setting.xml file under vagrant ipxe example Set offline: true Use ipxe vagrant example to setup a 3 nodes cluster Verification Steps Open Settings, edit http-proxy with the following values HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127. + <ul> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1218">#1218</a> Missing http proxy settings on rke2 and rancher pod</p> </li> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1012">#1012</a> Failed to create image when deployed in private network environment</p> </li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup an airgapped harvester</p> <ol> <li>Clone ipxe example repository <a href="https://github.com/harvester/ipxe-examples">https://github.com/harvester/ipxe-examples</a></li> <li>Edit the <code>setting.xml</code> file under vagrant ipxe example</li> <li>Set offline: <code>true</code></li> <li>Use ipxe vagrant example to setup a 3 nodes cluster</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Open Settings, edit <code>http-proxy</code> with the following values</li> </ol> <pre tabindex="0"><code>HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; </code></pr Image filtering by labels https://harvester.github.io/tests/manual/_incoming/2319-image-filtering-by-labels/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2319-image-filtering-by-labels/ - Related issues: #2319 [FEATURE] Image filtering by labels Category: Image Verification Steps Upload several images and add related label Go to the image list page Add filter according to test plan 1 Go to VM creation page Check the image list and search by name Import Harvester in Rancher Go to cluster management page Create a RKE2 cluster Check the image list and search by name Expected Results Test Result 1: The image list page can be filtered by label in the following cases + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2319">#2319</a> [FEATURE] Image filtering by labels</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Image</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Upload several images and add related label</li> <li>Go to the image list page</li> <li>Add filter according to test plan 1</li> <li>Go to VM creation page</li> <li>Check the image list and search by name</li> <li>Import Harvester in Rancher</li> <li>Go to cluster management page</li> <li>Create a RKE2 cluster</li> <li>Check the image list and search by name</li> </ol> <h2 id="expected-results">Expected Results</h2> <h4 id="test-result-1">Test Result 1:</h4> <p>The image list page can be filtered by label in the following cases</p> Image filtering by labels (e2e_fe) https://harvester.github.io/tests/manual/_incoming/2474-image-filtering-by-labels/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2474-image-filtering-by-labels/ - Related issues: #2474 [backport v1.0] [FEATURE] Image filtering by labels Category: Image Verification Steps Upload several images and add related label Go to the image list page Add filter according to test 
plan 1 Go to VM creation page Check the image list and search by name Import Harvester in Rancher Go to cluster management page Create a RKE2 cluster Check the image list and search by name Expected Results The image list page can be filtered by label in the following cases + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2474">#2474</a> [backport v1.0] [FEATURE] Image filtering by labels</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Image</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Upload several images and add related label</li> <li>Go to the image list page</li> <li>Add filter according to test plan 1</li> <li>Go to VM creation page</li> <li>Check the image list and search by name</li> <li>Import Harvester in Rancher</li> <li>Go to cluster management page</li> <li>Create a RKE2 cluster</li> <li>Check the image list and search by name</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>The image list page can be filtered by label in the following cases</p> Image handling consistency between terraform data resource and Harvester UI created image https://harvester.github.io/tests/manual/_incoming/2443-image-consistency-terraform-data-harvester-ui/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2443-image-consistency-terraform-data-harvester-ui/ - Related issues: #2443 [BUG] Image handling inconsistency between &ldquo;Harvester Terraform harvester_image data source&rdquo; vs. &ldquo;UI created Image&rdquo; Category: Terraform Verification Steps Download latest terraform-provider terraform-provider-harvester_0.5.1_linux_amd64.zip Extra the zip file Create the install-terraform-provider-harvester.sh with the following content #!/usr/bin/env bash [[ -n $DEBUG ]] &amp;&amp; set -x set -eou pipefail usage() { cat &lt;&lt;HELP USAGE: install-terraform-provider-harvester.sh HELP } version=0.5.1 arch=linux_amd64 terraform_harvester_provider_bin=./terraform-provider-harvester terraform_harvester_provider_dir=&#34;${HOME}/.terraform.d/plugins/registry.terraform.io/harvester/harvester/${version}/${arch}/&#34; mkdir -p &#34;${terraform_harvester_provider_dir}&#34; cp ${terraform_harvester_provider_bin} &#34;${terraform_harvester_provider_dir}/terraform-provider-harvester_v${version}&#34; Rename the extraced terraform-provider-harvester_v0. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2443">#2443</a> [BUG] Image handling inconsistency between &ldquo;Harvester Terraform harvester_image data source&rdquo; vs. 
&ldquo;UI created Image&rdquo;</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Terraform</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Download latest terraform-provider <a href="https://github.com/harvester/terraform-provider-harvester/releases/download/v0.5.1/terraform-provider-harvester_0.5.1_linux_amd64.zip">terraform-provider-harvester_0.5.1_linux_amd64.zip</a></p> </li> <li> <p>Extra the zip file</p> </li> <li> <p>Create the install-terraform-provider-harvester.sh with the following content</p> <pre tabindex="0"><code>#!/usr/bin/env bash [[ -n $DEBUG ]] &amp;&amp; set -x set -eou pipefail usage() { cat &lt;&lt;HELP USAGE: install-terraform-provider-harvester.sh HELP } version=0.5.1 arch=linux_amd64 terraform_harvester_provider_bin=./terraform-provider-harvester terraform_harvester_provider_dir=&#34;${HOME}/.terraform.d/plugins/registry.terraform.io/harvester/harvester/${version}/${arch}/&#34; mkdir -p &#34;${terraform_harvester_provider_dir}&#34; cp ${terraform_harvester_provider_bin} &#34;${terraform_harvester_provider_dir}/terraform-provider-harvester_v${version}&#34; </code></pr Image naming with inline CSS (e2e_fe) https://harvester.github.io/tests/manual/_incoming/2563-image-naming-inline-css/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2563-image-naming-inline-css/ - Related issues: #2563 [[BUG] harvesterhci.io.virtualmachineimage spec.displayName displays differently in single view of image Category: Images Verification Steps Go to images Click &ldquo;Create&rdquo; Upload an image or leverage an url - but name the image something like: &lt;strong&gt;&lt;em&gt;something_interesting&lt;/em&gt;&lt;/strong&gt; Wait for upload to complete. Observe the display name within the list of images Compare that to clicking into the single image and viewing it Expected Results The list view naming would be the same as the single view of the image + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2563">#2563</a> [[BUG] harvesterhci.io.virtualmachineimage spec.displayName displays differently in single view of image</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Images</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Go to images</li> <li>Click &ldquo;Create&rdquo;</li> <li>Upload an image or leverage an url - but name the image something like: <code>&lt;strong&gt;&lt;em&gt;something_interesting&lt;/em&gt;&lt;/strong&gt;</code></li> <li>Wait for upload to complete.</li> <li>Observe the display name within the list of images</li> <li>Compare that to clicking into the single image and viewing it</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The list view naming would be the same as the single view of the image</li> </ol> Image upload does not start when HTTP Proxy is configured https://harvester.github.io/tests/manual/_incoming/2436-2524-image-upload-failed-when-http-proxy-configured/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2436-2524-image-upload-failed-when-http-proxy-configured/ - Related issues: #2436 [BUG] Image upload does not start when HTTP Proxy is configured Related issues: #2524 [backport v1.0] [BUG] Image upload does not start when HTTP Proxy is configured Category: Image Verification Steps Clone ipxe-example vagrant project https://github.com/harvester/ipxe-examples Edit settings.yml Set harvester_network_config.offline=true Create a one node air gapped Harvester with a HTTP proxy server 
Access Harvester settings page Add the following http proxy configuration { &#34;httpProxy&#34;: &#34;http://192.168.0.254:3128&#34;, &#34;httpsProxy&#34;: &#34;http://192. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2436">#2436</a> [BUG] Image upload does not start when HTTP Proxy is configured</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2524">#2524</a> [backport v1.0] [BUG] Image upload does not start when HTTP Proxy is configured</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Image</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Clone ipxe-example vagrant project <a href="https://github.com/harvester/ipxe-examples">https://github.com/harvester/ipxe-examples</a></li> <li>Edit settings.yml</li> <li>Set <code>harvester_network_config.offline=true</code></li> <li>Create a one node air gapped Harvester with a HTTP proxy server</li> <li>Access Harvester settings page</li> <li>Add the following http proxy configuration</li> </ol> <pre tabindex="0"><code>{ &#34;httpProxy&#34;: &#34;http://192.168.0.254:3128&#34;, &#34;httpsProxy&#34;: &#34;http://192.168.0.254:3128&#34;, &#34;noProxy&#34;: &#34;localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,cattle-system.svc,192.168.0.0/16,.svc,.cluster.local,example.com&#34; } </code></pr Import and make changes to clusternetwork resource https://harvester.github.io/tests/manual/terraformer/import-edit-clusternetwork/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraformer/import-edit-clusternetwork/ - Import clusternetwork resource terraformer import harvester -r clusternetwork Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo; Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: default_physical_nic, enable in the clusternetwork.tf file Check the change through either the UI or the API Expected Results Import output terraformer import harvester -r clusternetwork 2021/08/04 15:43:25 harvester importing. + <ol> <li>Import clusternetwork resource</li> </ol> <pre tabindex="0"><code>terraformer import harvester -r clusternetwork </code></pre><ol> <li>Replace the provider (already explained in the installation process above)</li> <li>terraform plan and apply command should print &ldquo;No changes.&rdquo;</li> <li>Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: default_physical_nic, enable in the clusternetwork.tf file</li> <li>Check the change through either the UI or the API</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Import output</li> </ul> <pre tabindex="0"><code>terraformer import harvester -r clusternetwork 2021/08/04 15:43:25 harvester importing... clusternetwork 2021/08/04 15:43:26 harvester done importing clusternetwork ... 
</code></pr Import and make changes to image resource https://harvester.github.io/tests/manual/terraformer/import-edit-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraformer/import-edit-image/ - Import image resource terraformer import harvester -r image Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo; Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: description, display_name, name, namespace and url in the image.tf file Check the change through either the UI or the API Expected Results Import output terraformer import harvester -r image 2021/08/04 16:14:52 harvester importing. + <ol> <li>Import image resource <code>terraformer import harvester -r image</code></li> <li>Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo;</li> <li>Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: description, display_name, name, namespace and url in the image.tf file</li> <li>Check the change through either the UI or the API</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Import output</li> </ul> <pre tabindex="0"><code>terraformer import harvester -r image 2021/08/04 16:14:52 harvester importing... image 2021/08/04 16:14:52 harvester done importing image ... </code></pr Import and make changes to network resource https://harvester.github.io/tests/manual/terraformer/import-edit-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraformer/import-edit-network/ - Import network resource terraformer import harvester -r network Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo; Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: name, namespace and vlan_id in the network.tf file Check the change through either the UI or the API Expected Results Import output terraformer import harvester -r network 2021/08/04 16:14:08 harvester importing. + <ol> <li>Import network resource <code>terraformer import harvester -r network</code></li> <li>Replace the provider (already explained in the installation process above)</li> <li>terraform plan and apply command should print &ldquo;No changes.&rdquo;</li> <li>Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: name, namespace and vlan_id in the network.tf file</li> <li>Check the change through either the UI or the API</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Import output</li> </ul> <pre tabindex="0"><code>terraformer import harvester -r network 2021/08/04 16:14:08 harvester importing... network 2021/08/04 16:14:08 harvester done importing network ... 
</code></pr Import and make changes to ssh_key resource https://harvester.github.io/tests/manual/terraformer/import-edit-ssh-key/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraformer/import-edit-ssh-key/ - Import ssh_key resource terraformer import harvester -r ssh_key Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo; Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: name, namespace and public_key in the ssh_key.tf file Check the change through either the UI or the API Expected Results Import output terraformer import harvester -r ssh_key 2021/08/04 16:14:36 harvester importing. + <ol> <li>Import ssh_key resource <code>terraformer import harvester -r ssh_key</code></li> <li>Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo;</li> <li>Alter the resource and check with terraform plan then terraform apply</li> <li>For instance, alter the following properties: name, namespace and public_key in the ssh_key.tf file</li> <li>Check the change through either the UI or the API</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Import output</li> </ul> <pre tabindex="0"><code>terraformer import harvester -r ssh_key 2021/08/04 16:14:36 harvester importing... ssh_key 2021/08/04 16:14:37 harvester done importing ssh_key ... </code></pr Import and make changes to virtual machine resource https://harvester.github.io/tests/manual/terraformer/import-edit-virtual-machine/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraformer/import-edit-virtual-machine/ - Import virtual machine resource terraformer import harvester -r virtualmachine Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo; Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: cpu, memory, name in the virtualmachine.tf file Check the change through either the UI or the API Expected Results Import output terraformer import harvester -r virtualmachine 2021/08/04 16:15:08 harvester importing. + <ol> <li>Import virtual machine resource <code>terraformer import harvester -r virtualmachine</code></li> <li>Replace the provider (already explained in the installation process above)</li> <li>terraform plan and apply command should print &ldquo;No changes.&rdquo;</li> <li>Alter the resource and check with terraform plan then terraform apply</li> <li>For instance, alter the following properties: cpu, memory, name in the virtualmachine.tf file</li> <li>Check the change through either the UI or the API</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Import output</li> </ul> <pre tabindex="0"><code>terraformer import harvester -r virtualmachine 2021/08/04 16:15:08 harvester importing... virtualmachine 2021/08/04 16:15:09 harvester done importing virtualmachine ... 
</code></pr Import and make changes to volume resource https://harvester.github.io/tests/manual/terraformer/import-edit-volume/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraformer/import-edit-volume/ - Import volume resource terraformer import harvester -r volume Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo; Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: name, namespace in the volume.tf file Check the change through either the UI or the API Expected Results Import output terraformer import harvester -r volume 2021/08/04 16:15:29 harvester importing. + <ol> <li>Import volume resource <code>terraformer import harvester -r volume</code></li> <li>Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo;</li> <li>Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: name, namespace in the volume.tf file</li> <li>Check the change through either the UI or the API</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Import output</li> </ul> <pre tabindex="0"><code>terraformer import harvester -r volume 2021/08/04 16:15:29 harvester importing... volume 2021/08/04 16:15:29 harvester done importing volume ... </code></pr Import External Harvester https://harvester.github.io/tests/manual/node-driver/import-external-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/import-external-harvester/ - With Rancher &lt; 2.6: Deploy the rancher and harvester clusters separately In the rancher, add a harvester node template Select &ldquo;External Harvester&rdquo;, and refer to &ldquo;Test Data&rdquo; for other value settings. Use this template to create the corresponding cluster With Rancher 2.6: Home page / Import Existing / Generic Add cluster name and click on Create Follow the registration steps Expected Results The status of the created cluster shows active The status of the corresponding vm on harvester active The information displayed on rancher and harvester matches the template configuration Test Data Harvester Node Template HARVESTER OPTIONS Account Access External Harvester Host: Port: 443 Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo Image: openSUSE-Leap-15. 
+ <p>With Rancher &lt; 2.6:</p> <ol> <li>Deploy the rancher and harvester clusters separately</li> <li>In the rancher, add a harvester node template</li> <li>Select &ldquo;External Harvester&rdquo;, and refer to &ldquo;Test Data&rdquo; for other value settings.</li> <li>Use this template to create the corresponding cluster With Rancher 2.6:</li> <li>Home page / Import Existing / Generic</li> <li>Add cluster name and click on Create</li> <li>Follow the registration steps</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>The status of the corresponding vm on harvester active</li> <li>The information displayed on rancher and harvester matches the template configuration</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>External Harvester</li> <li>Host: <!-- raw HTML omitted --> Port: 443</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Import internal harvester https://harvester.github.io/tests/manual/node-driver/import-internal-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/import-internal-harvester/ - enable harvester&rsquo;s rancher-enabled setting Click the rancher button in the upper right corner to access the internal rancher add a harvester node template Select &ldquo;Internal Harvester&rdquo;, and refer to &ldquo;Test Data&rdquo; for other value settings. Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo Image: openSUSE-Leap-15. 
+ <ol> <li>enable harvester&rsquo;s rancher-enabled setting</li> <li>Click the rancher button in the upper right corner to access the internal rancher</li> <li>add a harvester node template</li> <li>Select &ldquo;Internal Harvester&rdquo;, and refer to &ldquo;Test Data&rdquo; for other value settings.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Improved resource reservation https://harvester.github.io/tests/manual/_incoming/2347_improved_resource_reservation/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2347_improved_resource_reservation/ - Ref: https://github.com/harvester/harvester/issues/2347, https://github.com/harvester/harvester/issues/1700 Test Information Environment: Baremetal DL160G9 5 nodes Harvester Version: master-96b90714-head ui-source Option: Auto Verify Steps: Install Harvester with any nodes Login and Navigate to Hosts CPU/Memory/Storage should display Reserved and Used percentage. Navigate to Host&rsquo;s details Monitor Data should display Reserved and Used percentage, and should equals to the value in Hosts. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2347">https://github.com/harvester/harvester/issues/2347</a>, <a href="https://github.com/harvester/harvester/issues/1700">https://github.com/harvester/harvester/issues/1700</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/174753699-f65e66c6-677b-4a3a-8f71-bfbb7a3b1bb2.png" alt="image"> <img src="https://user-images.githubusercontent.com/5169694/174754418-c5786f38-5909-40ce-8076-c3eddcd3059a.png" alt="image"></p> <h2 id="test-information">Test Information</h2> <ul> <li>Environment: <strong>Baremetal DL160G9 5 nodes</strong></li> <li>Harvester Version: <strong>master-96b90714-head</strong></li> <li><strong>ui-source</strong> Option: <strong>Auto</strong></li> </ul> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login and Navigate to Hosts</li> <li>CPU/Memory/Storage should display <strong>Reserved</strong> and <strong>Used</strong> percentage.</li> <li>Navigate to Host&rsquo;s details</li> <li>Monitor Data should display <strong>Reserved</strong> and <strong>Used</strong> percentage, and should equals to the value in Hosts.</li> </ol> Initiate multiple migrations at one time https://harvester.github.io/tests/manual/live-migration/initiate-multple-migrations-same-time/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/initiate-multple-migrations-same-time/ - Initiate live migration for a vm. While the live migration is in progress, initiate another migration Expected Results Both migration should work fine. 
The VMs should be accessible after the migration + <ol> <li>Initiate live migration for a vm.</li> <li>While the live migration is in progress, initiate another migration</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Both migrations should work fine.</li> <li>The VMs should be accessible after the migration</li> </ol> Install 2 node Harvester with a Harvester token with multiple words https://harvester.github.io/tests/manual/deployment/812-multiple-word-harvester-token/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/812-multiple-word-harvester-token/ - Related issues: #812 ISO install accepts multiple words for &lsquo;cluster token&rsquo; value resulting in failure to join cluster Verification Steps Start Harvester install from ISO At the &lsquo;Cluster token&rsquo; prompt, enter, here are words Proceed to complete the installation Boot a secondary host from the installation ISO and select the option to join an existing cluster At the &lsquo;Cluster token&rsquo; prompt, enter, here are words Proceed to complete the installation Verify both hosts show in the hosts list at the VIP Expected Results Install should complete successfully Host should be added with no errors Both hosts should show up + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/812">#812</a> ISO install accepts multiple words for &lsquo;cluster token&rsquo; value resulting in failure to join cluster</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Start Harvester install from ISO</li> <li>At the &lsquo;Cluster token&rsquo; prompt, enter, <code>here are words</code></li> <li>Proceed to complete the installation</li> <li>Boot a secondary host from the installation ISO and select the option to join an existing cluster</li> <li>At the &lsquo;Cluster token&rsquo; prompt, enter, <code>here are words</code></li> <li>Proceed to complete the installation</li> <li>Verify both hosts show in the hosts list at the VIP</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Install should complete successfully</li> <li>Host should be added with no errors</li> <li>Both hosts should show up</li> </ol> Install Harvester from USB disk https://harvester.github.io/tests/manual/deployment/install_via_usb/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/install_via_usb/ - Ref: https://github.com/harvester/harvester/issues/1200 Verify Items Harvester can be installed via USB stick Case: Install Harvester via USB disk Follow the instructions to create a USB disk Harvester should be able to be installed via the USB on UEFI-based bare metals + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1200">https://github.com/harvester/harvester/issues/1200</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Harvester can be installed via USB stick</li> </ul> <h2 id="case-install-harvester-via-usb-disk">Case: Install Harvester via USB disk</h2> <ol> <li>Follow <a href="https://docs.harvesterhci.io/v1.0/install/usb-install/">the instructions</a> to create a USB disk</li> <li>Harvester should be able to be installed via the USB on <strong>UEFI-based</strong> bare metals</li> </ol> Install Harvester on a bare Metal node using ISO image https://harvester.github.io/tests/manual/deployment/install-bare-metal-iso/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/install-bare-metal-iso/ - Install using ISO image Expected Results On completion of the installation, Harvester should provide the
management url and show status. Harvester and Longhorn components should be up and running in the cluster. Verify the memory, cpu and storage size shown on the Harvester UI + <p><a href="https://docs.harvesterhci.io/v1.3/install/index/">Install using ISO image</a></p> <h2 id="expected-results">Expected Results</h2> <ol> <li>On completion of the installation, Harvester should provide the management url and show status.</li> <li>Harvester and Longhorn components should be up and running in the cluster.</li> <li>Verify the memory, cpu and storage size shown on the Harvester UI</li> </ol> Install Harvester on a bare Metal node using PXE boot (e2e_be) https://harvester.github.io/tests/manual/deployment/install-bare-metal-pxe/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/install-bare-metal-pxe/ - Install Harvester using PXE boot Expected Results On completion of the installation, Harvester should provide the management url and show status. Harvester and Longhorn components should be up and running in the cluster. Verify the memory, cpu and storage size shown on the Harvester UI + <p><a href="https://docs.harvesterhci.io/v1.3/install/pxe-boot-install">Install Harvester using PXE boot</a></p> <h2 id="expected-results">Expected Results</h2> <ol> <li>On completion of the installation, Harvester should provide the management url and show status.</li> <li>Harvester and Longhorn components should be up and running in the cluster.</li> <li>Verify the memory, cpu and storage size shown on the Harvester UI</li> </ol> Install Harvester on a virtual nested node using ISO image https://harvester.github.io/tests/manual/deployment/install-nested-virtualization/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/install-nested-virtualization/ - Install using ISO image Expected Results On completion of the installation, Harvester should provide the management url and show status. Harvester and Longhorn components should be up and running in the cluster. Verify the memory, cpu and storage size shown on the Harvester UI + <p><a href="https://docs.harvesterhci.io/v1.3/install/index/">Install using ISO image</a></p> <h2 id="expected-results">Expected Results</h2> <ol> <li>On completion of the installation, Harvester should provide the management url and show status.</li> <li>Harvester and Longhorn components should be up and running in the cluster.</li> <li>Verify the memory, cpu and storage size shown on the Harvester UI</li> </ol> Install Harvester on NVMe SSD https://harvester.github.io/tests/manual/deployment/install_on_nvme/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/install_on_nvme/ - Ref: https://github.com/harvester/harvester/issues/1627 Verify Items Harvester can detect NVMe SSD when installing Harvester can be installed on NVMe SSD Case: Install Harvester on NVMe disk Create block image as NVMe disk Run dd if=/dev/zero of=/var/lib/libvirt/images/nvme145.img bs=1M count=148480 Then Change file owner chown qemu:qemu /var/lib/libvirt/images/nvme145.img Create VM via virt-manager Select Manual install, set Generic OS, Memory:9216, CPUs:8, Uncheck enable storage&hellip; and check customize configuration before install Select Firmware to use UEFI x86_64 (use usr/share/qemu/ovmf-x86_64-code. 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/1627">https://github.com/harvester/harvester/issues/1627</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Harvester can detect NVMe SSD when installing</li> <li>Harvester can be installed on NVMe SSD</li> </ul> <h2 id="case-install-harvester-on-nvme-disk">Case: Install Harvester on NVMe disk</h2> <ol> <li>Create block image as NVMe disk <ul> <li>Run <code>dd if=/dev/zero of=/var/lib/libvirt/images/nvme145.img bs=1M count=148480</code></li> <li>Then Change file owner <code>chown qemu:qemu /var/lib/libvirt/images/nvme145.img</code></li> </ul> </li> <li>Create VM via <em>virt-manager</em> <ul> <li>Select <em>Manual install</em>, set <strong>Generic OS</strong>, <code>Memory:9216</code>, <code>CPUs:8</code>, Uncheck <em><strong>enable storage&hellip;</strong></em> and check <strong>customize configuration before install</strong></li> <li>Select <em>Firmware</em> to use <strong>UEFI x86_64</strong> (use <code>usr/share/qemu/ovmf-x86_64-code.bin</code> in SUSE Leap 15.3)</li> <li>Select <em>Chipset</em> to use <strong>i440FX</strong></li> <li>Click <strong>Add Hardware</strong> to add CD-ROM including Harvester iso</li> <li>Update <strong>Boot Options</strong> to <strong>Enable boot menu</strong> and enable the CD-ROM</li> <li>edit XML with update <code>&lt;domain type=&quot;kvm&quot;&gt;</code> to <code>&lt;domain type=&quot;kvm&quot; xmlns:qemu=&quot;http://libvirt.org/schemas/domain/qemu/1.0&quot;&gt;</code></li> <li>append NVMe xml node into <strong>domain</strong>, then Begin Installation</li> </ul> </li> </ol> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-xml" data-lang="xml"><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:commandline&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;-drive&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;file=/var/lib/libvirt/images/nvme.img,if=none,id=D22,format=raw&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;-device&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;nvme,drive=D22,serial=1234&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;/qemu:commandline&gt;</span> </span></span></code></pr Install Harvester over previous GNU/Linux install https://harvester.github.io/tests/manual/_incoming/2230-2450-install-harvester-over-gnu-linux/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2230-2450-install-harvester-over-gnu-linux/ - Related issues: #2230 [BUG] harvester installer - always first attempt failed if before was linux installed Related issues: #2450 [backport v1.0][BUG] harvester installer - always first attempt failed if before was linux installed #2450 Category: Installtion 
Verification Steps Install GNU/Linux with LVM configuration Reboot Install Harvester via ISO over the previous Linux install Verify the Harvester install by changing the password and logging in. Expected Results Install should complete + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2230">#2230</a> [BUG] harvester installer - always first attempt failed if before was linux installed</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2450">#2450</a> [backport v1.0][BUG] harvester installer - always first attempt failed if before was linux installed #2450</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Installation</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install GNU/Linux with LVM configuration</li> <li>Reboot</li> <li>Install Harvester via ISO over the previous Linux install</li> <li>Verify the Harvester install by changing the password and logging in.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Install should complete</li> </ol> Install Option `HwAddr` for Network Interface https://harvester.github.io/tests/manual/deployment/hwaddr_configre_option/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/hwaddr_configre_option/ - Ref: https://github.com/harvester/harvester/issues/1064 Verify Items Configure Option HwAddr is working on install configuration Case: Use HwAddr to install harvester via PXE Install Harvester with PXE installation, set hwAddr instead of name in install.networks Harvester should be installed successfully + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1064">https://github.com/harvester/harvester/issues/1064</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Configure Option <code>HwAddr</code> is working on install configuration</li> </ul> <h3 id="case-use-hwaddr-to-install-harvester-via-pxe">Case: Use <code>HwAddr</code> to install harvester via PXE</h3> <ol> <li>Install Harvester with PXE installation, set <code>hwAddr</code> instead of <code>name</code> in <strong>install.networks</strong></li> <li>Harvester should be installed successfully</li> </ol> Install Option `install.device` support symbolic link https://harvester.github.io/tests/manual/deployment/install_symblic_link/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/install_symblic_link/ - Ref: https://github.com/harvester/harvester/issues/1462 Verify Items Disk&rsquo;s symbolic link can be used in install configure option install.device Case: Harvester install with configure symbolic link on install.device Install Harvester with any nodes Log in to the console, use ls -l /dev/disk/by-path to get the disk&rsquo;s link name Re-install Harvester with a config file that sets the disk&rsquo;s link name instead.
Harvester should be installed successfully + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1462">https://github.com/harvester/harvester/issues/1462</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Disk&rsquo;s symbolic link can be used in install configure option <code>install.device</code></li> </ul> <h2 id="case-harvester-install-with-configure-symbolic-link-on-installdevice">Case: Harvester install with configure symbolic link on <code>install.device</code></h2> <ol> <li>Install Harvester with any nodes</li> <li>Log in to the console, use <code>ls -l /dev/disk/by-path</code> to get the disk&rsquo;s link name</li> <li>Re-install Harvester with a config file that sets the disk&rsquo;s link name instead.</li> <li>Harvester should be installed successfully</li> </ol> Installation of the Harvester terraform provider (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/install-terraform-provider/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/install-terraform-provider/ - Follow the instructions in the README Expected Results The provider is initialized and the terraform init command succeeds: Initializing provider plugins... - Finding harvester/harvester versions matching &#34;~&gt; 0.1.0&#34;... - Installing harvester/harvester v0.1.0... - Installed harvester/harvester v0.1.0 (unauthenticated) ... Terraform has been successfully initialized! + <p>Follow the instructions in the <a href="https://github.com/harvester/terraform-provider-harvester#install-the-provider">README</a></p> <h2 id="expected-results">Expected Results</h2> <p>The provider is initialized and the terraform init command succeeds:</p> <pre tabindex="0"><code>Initializing provider plugins... - Finding harvester/harvester versions matching &#34;~&gt; 0.1.0&#34;... - Installing harvester/harvester v0.1.0... - Installed harvester/harvester v0.1.0 (unauthenticated) ... Terraform has been successfully initialized! </code></pre> Instance metadata variables are not expanded https://harvester.github.io/tests/manual/_incoming/2342_instance_metadata_variables_are_not_expanded/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2342_instance_metadata_variables_are_not_expanded/ - Ref: https://github.com/harvester/harvester/issues/2342 Verify Steps: Install Harvester with any nodes Create Image for VM creation Create VM with following CloudConfig ## template: jinja #cloud-config package_update: true password: password chpasswd: { expire: False } sshpwauth: True write_files: - content: | #!/bin/bash vmName=$1 echo &#34;VM Name is: $vmName&#34; &gt; /home/cloudinitscript.log path: /home/exec_initscript.sh permissions: &#39;0755&#39; runcmd: - - systemctl - enable - --now - qemu-guest-agent.service - - echo - &#34;{{ ds.meta_data.local_hostname }}&#34; - - /home/exec_initscript.
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/2342">https://github.com/harvester/harvester/issues/2342</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/177121301-f30bf8ec-0a70-4549-b11b-895161ee30ad.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create Image for VM creation</li> <li>Create VM with following <em>CloudConfig</em></li> </ol> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#75715e">## template: jinja</span> </span></span><span style="display:flex;"><span><span style="color:#75715e">#cloud-config</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">package_update</span>: <span style="color:#66d9ef">true</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">password</span>: <span style="color:#ae81ff">password</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">chpasswd</span>: { <span style="color:#f92672">expire</span>: <span style="color:#66d9ef">False</span> } </span></span><span style="display:flex;"><span><span style="color:#f92672">sshpwauth</span>: <span style="color:#66d9ef">True</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">write_files</span>: </span></span><span style="display:flex;"><span> - <span style="color:#f92672">content</span>: |<span style="color:#e6db74"> </span></span></span><span style="display:flex;"><span><span style="color:#e6db74"> #!/bin/bash </span></span></span><span style="display:flex;"><span><span style="color:#e6db74"> vmName=$1 </span></span></span><span style="display:flex;"><span><span style="color:#e6db74"> echo &#34;VM Name is: $vmName&#34; &gt; /home/cloudinitscript.log</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">path</span>: <span style="color:#ae81ff">/home/exec_initscript.sh</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">permissions</span>: <span style="color:#e6db74">&#39;0755&#39;</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">runcmd</span>: </span></span><span style="display:flex;"><span> - - <span style="color:#ae81ff">systemctl</span> </span></span><span style="display:flex;"><span> - <span style="color:#ae81ff">enable</span> </span></span><span style="display:flex;"><span> - --<span style="color:#ae81ff">now</span> </span></span><span style="display:flex;"><span> - <span style="color:#ae81ff">qemu-guest-agent.service</span> </span></span><span style="display:flex;"><span> - - <span style="color:#ae81ff">echo</span> </span></span><span style="display:flex;"><span> - <span style="color:#e6db74">&#34;{{ ds.meta_data.local_hostname }}&#34;</span> </span></span><span style="display:flex;"><span> - - <span style="color:#ae81ff">/home/exec_initscript.sh</span> </span></span><span style="display:flex;"><span> - <span style="color:#e6db74">&#34;{{ ds.meta_data.local_hostname }}&#34;</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">packages</span>: </span></span><span style="display:flex;"><span> - <span style="color:#ae81ff">qemu-guest-agent</span> </span></span></code></pr ISO installation console UI Display 
https://harvester.github.io/tests/manual/_incoming/2402-iso-installation-console-ui-display/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2402-iso-installation-console-ui-display/ - Related issues: #2402 [FEATURE] Enhance the information display of ISO installation console UI (tty) Category: Harvester Installer Verification Steps ISO install a single node Harvester Monitoring the ISO installation console UI ISO install a three node Harvester cluster Monitoring the ISO installation console UI of the first node Monitoring the ISO installation console UI of the second node Monitoring the ISO installation console UI of the third node Expected Results The ISO installation console UI enhancement can display correctly under the following single and multiple nodes scenarios. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2402">#2402</a> [FEATURE] Enhance the information display of ISO installation console UI (tty)</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Harvester Installer</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>ISO install a single node Harvester</li> <li>Monitoring the ISO installation console UI</li> <li>ISO install a three node Harvester cluster</li> <li>Monitoring the ISO installation console UI of the first node</li> <li>Monitoring the ISO installation console UI of the second node</li> <li>Monitoring the ISO installation console UI of the third node</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>The ISO installation console UI enhancement can display correctly under the following single and multiple nodes scenarios.</p> keypairs.harvesterhci.io https://harvester.github.io/tests/manual/webhooks/keypairs.harvesterhci.io/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/webhooks/keypairs.harvesterhci.io/ - GUI Enable VLAN network in settings Create a network with VLAN 5 and assume its name is my-network. C1. reate another network with VLAN 5: it should fails with: admission webhook &ldquo;validator.harvesterhci.io&rdquo; denied the request: VLAN ID 5 is already allocated Create a VM on VLAN 5, delete network my-network and it should fail with: admission webhook &ldquo;validator.harvesterhci.io&rdquo; denied the request: network my-network is still used by vm(s): vm-test in a modal. + <h3 id="gui">GUI</h3> <ol> <li>Enable VLAN network in settings</li> <li>Create a network with VLAN 5 and assume its name is my-network. C1. 
reate another network with VLAN 5: it should fails with: admission webhook &ldquo;<a href="http://validator.harvesterhci.io/">validator.harvesterhci.io</a>&rdquo; denied the request: VLAN ID 5 is already allocated</li> <li>Create a VM on VLAN 5, delete network my-network and it should fail with: admission webhook &ldquo;<a href="http://validator.harvesterhci.io/">validator.harvesterhci.io</a>&rdquo; denied the request: network my-network is still used by vm(s): vm-test in a modal.</li> </ol> <h2 id="expected-results">Expected Results</h2> <h3 id="gui-1">GUI</h3> <ol> <li>The operations should fail with.</li> </ol> Ksmd support merge_across_node on/off https://harvester.github.io/tests/manual/_incoming/2827_ksmd_support_merge_across_node_onoff_/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2827_ksmd_support_merge_across_node_onoff_/ - Ref: https://github.com/harvester/harvester/issues/2827 Verify Steps: Install Harvester with any nodes Login to Dashboard and Navigate to hosts Edit node1&rsquo;s Ksmtuned to Run and ThresCoef to 85 then Click Save Login to node1&rsquo;s console, execute kubectl get ksmtuned -oyaml --field-selector metadata.name=&lt;node1&gt; Fields in spec should be the same as Dashboard configured Create an image for VM creation Create multiple VMs with 2Gi+ memory and schedule on &lt;node1&gt; (memory size reflect to &rsquo;s maximum size, total of VMs&rsquo; memory should greater than 40%) Execute watch -n1 grep . + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2827">https://github.com/harvester/harvester/issues/2827</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/193305898-48255477-1d19-48af-b132-3c019bd3f58b.png" alt="image"> <img src="https://user-images.githubusercontent.com/5169694/193314630-7add9b5a-2d9e-49cb-8d3a-1075531145e8.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard and Navigate to hosts</li> <li>Edit <em>node1</em>&rsquo;s <strong>Ksmtuned</strong> to <code>Run</code> and <strong>ThresCoef</strong> to <code>85</code> then Click <strong>Save</strong></li> <li>Login to <em>node1</em>&rsquo;s console, execute <code>kubectl get ksmtuned -oyaml --field-selector metadata.name=&lt;node1&gt;</code></li> <li>Fields in <code>spec</code> should be the same as Dashboard configured</li> <li>Create an image for VM creation</li> <li>Create multiple VMs with 2Gi+ memory and schedule on <code>&lt;node1&gt;</code> (memory size reflect to <!-- raw HTML omitted -->&rsquo;s maximum size, total of VMs&rsquo; memory should greater than 40%)</li> <li>Execute <code>watch -n1 grep . /sys/kernel/mm/ksm/*</code> to monitor ksm&rsquo;s status change <ul> <li><code>/sys/kernel/mm/ksm/run</code> should be update to <code>1</code> after VMs started</li> <li><code>/sys/kernel/mm/ksm/page_*</code> should updating continuously</li> </ul> </li> <li>Login to Dashboard then navigate to <em>Hosts</em>, click <!-- raw HTML omitted --></li> <li>In the Tab of <strong>Ksmtuned</strong>, values in Statistics section should not be <code>0</code>. 
(data in this section will be updated per min, so it not equals to console&rsquo;s output was expected.)</li> <li>Update <!-- raw HTML omitted -->&rsquo;s <strong>Ksmtuned</strong> to check <code>Enable Merge Across Nodes</code></li> <li>Monitor data in Step.8 should reflect to: <ul> <li><code>/sys/kernel/mm/ksm/run</code> should be updated to <code>2</code></li> <li><code>/sys/kernel/mm/ksm/pages_*</code> should be updated to <code>0</code></li> </ul> </li> <li>Restart all VMs scheduling to <code>&lt;node1&gt;</code></li> <li>Monitor data in Step.8 should reflect to: <ul> <li><code>/sys/kernel/mm/ksm/run</code> should be updated to <code>1</code></li> <li><code>/sys/kernel/mm/ksm/pages_*</code> should be updated and less than Step.8 monitored</li> </ul> </li> </ol> Limit VM of guest cluster in the same namespace https://harvester.github.io/tests/manual/_incoming/2354-limit-vm-of-guest-cluster-same-namespace/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2354-limit-vm-of-guest-cluster-same-namespace/ - Related issues: #2354 [FEATURE] Limit all VMs of the Harvester guest cluster in the same namespace Category: Rancher integration Verification Steps Import Harvester from Rancher Access Harvester via virtualization management Create a test project and ns1 namespace Create two RKE1 node template, one set to default namespace and another set to ns1 namespace Create a RKE1 cluster, select the first pool using the first node template Create another pool, check can&rsquo;t select the second node template Create a RKE2 cluster, set the first pool using specific namespace Add another machine pool, check it will automatically assigned the same namespace as the first pool Expected Results On RKE2 cluster page, when we select the first machine pool to specific namespace, then the second pool will automatically and can only use the same namespace as the first pool + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2354">#2354</a> [FEATURE] Limit all VMs of the Harvester guest cluster in the same namespace</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import Harvester from Rancher</li> <li>Access Harvester via virtualization management</li> <li>Create a test project and <code>ns1</code> namespace</li> <li>Create two RKE1 node template, one set to default namespace and another set to ns1 namespace</li> <li>Create a RKE1 cluster, select the first pool using the first node template</li> <li>Create another pool, check can&rsquo;t select the second node template</li> <li>Create a RKE2 cluster, set the first pool using specific namespace</li> <li>Add another machine pool, check it will automatically assigned the same namespace as the first pool</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li> <p>On RKE2 cluster page, when we select the first machine pool to specific namespace, then the second pool will automatically and can only use the same namespace as the first pool</p> Local cluster user input topology key https://harvester.github.io/tests/manual/_incoming/2567-local-cluster-user-input-topology-key/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2567-local-cluster-user-input-topology-key/ - Related issues: #2567 [BUG] Local cluster owner create Harvester cluster failed(RKE2) Category: Rancher integration Verification Steps Import Harvester from Rancher Create a standard user local in 
Rancher User &amp; Authentication Open Cluster Management page Edit cluster config Expand Member Roles Add local user with Cluster Owner role Create cloud credential of Harvester Login with local user Open the provisioning RKE2 cluster page Select Advanced settings Add Pod Scheduling Select Pods in these namespaces Check can input Topology key value Expected Results Login with cluster owner role and provision a RKE2 cluster we can input the topology key in the Topology key field of the pod selector + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2567">#2567</a> [BUG] Local cluster owner create Harvester cluster failed(RKE2)</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import Harvester from Rancher</li> <li>Create a standard user <code>local</code> in Rancher User &amp; Authentication</li> <li>Open Cluster Management page</li> <li>Edit cluster config <img src="https://user-images.githubusercontent.com/29251855/182781682-5cdd3c6a-517b-4f61-980d-3ee3cab86745.png" alt="image"></li> <li>Expand Member Roles</li> <li>Add <code>local</code> user with Cluster Owner role <img src="https://user-images.githubusercontent.com/29251855/182781823-b71ba504-6488-4581-b50d-17c333496b8c.png" alt="image"></li> <li>Create cloud credential of Harvester</li> <li>Login with <code>local</code> user</li> <li>Open the provisioning RKE2 cluster page</li> <li>Select Advanced settings</li> <li>Add Pod Scheduling</li> <li>Select <code>Pods in these namespaces</code></li> <li>Check can input Topology key value</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Login with cluster owner role and provision a RKE2 cluster</li> <li>we can input the topology key in the Topology key field of the pod selector <img src="https://user-images.githubusercontent.com/29251855/182752496-1fa49c1d-1b93-4147-9d5b-ef3a56d5bd2b.png" alt="image"></li> </ol> Logging Output Filter https://harvester.github.io/tests/manual/_incoming/2817-logging-output-filter/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2817-logging-output-filter/ - Related issues: #2817 [BUG]Logging Output needs filter Category: Audit Logging Verification Steps Create an Audit Only type of Output named audit-output Create an Audit Only type of ClusterOutput named audit-cluster-output Create a Flow, select the type to Logging or Event Check you can&rsquo;t select the audit-output and audiot-cluster-output select the type to Audit Check you can select the audit-output and audit-cluster-output Create a ClusterFlow, select the type to Logging or Event Check you can&rsquo;t select the audiot-cluster-output select the type to Audit Check you can select the audiot-cluster-output Create an logging/event type of Output named logging-event-output Create an logging/event type of ClusterOutput named logging-event-cluster-output Create a Flow, select the type to Logging or Event Check you can select the logging-event-output and logging-event-output Create a ClusterFlow, select the type to Logging or Event Check you can select the logging-event-output and logging-event-output Expected Results The logging or the Event type of Flow can only select Logging or Event type of Output Can&rsquo;t select the Audit type of Output The logging or the Event type of ClusterFlow can only select Logging or Event type of ClusterOutput Can&rsquo;t select the Audit type of ClusterOutput The Audit type of Flow can only select 
Audit type of Output The Audit type of ClusterFlow can only select Audit type of ClusterOutput + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2817">#2817</a> [BUG]Logging Output needs filter</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Audit Logging</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create an <code>Audit Only</code> type of Output named <code>audit-output</code> <img src="https://user-images.githubusercontent.com/29251855/193509247-09f5efd9-c43d-4514-bb84-55cd34c243b1.png" alt="image"></li> <li>Create an <code>Audit Only</code> type of ClusterOutput named <code>audit-cluster-output</code></li> <li>Create a Flow, select the type to <code>Logging</code> or <code>Event</code></li> <li>Check you <strong>can&rsquo;t</strong> select the <code>audit-output</code> and <code>audiot-cluster-output</code></li> <li>select the type to <code>Audit </code></li> <li>Check you <strong>can</strong> select the <code>audit-output</code> and <code>audit-cluster-output</code> <img src="https://user-images.githubusercontent.com/29251855/193510780-2f2f6d09-7ee6-433b-80ae-eb3879337513.png" alt="image"></li> <li>Create a ClusterFlow, select the type to <code>Logging</code> or <code>Event</code></li> <li>Check you <strong>can&rsquo;t</strong> select the <code>audiot-cluster-output</code></li> <li>select the type to <code>Audit</code></li> <li>Check you <strong>can</strong> select the <code>audiot-cluster-output</code></li> <li>Create an <code>logging/event</code> type of Output named <code>logging-event-output</code> <img src="https://user-images.githubusercontent.com/29251855/193512327-8ff2cadf-d02d-453f-96e9-fbc7d64ad91f.png" alt="image"></li> <li>Create an <code>logging/event</code> type of ClusterOutput named <code>logging-event-cluster-output</code> <img src="https://user-images.githubusercontent.com/29251855/193512534-82d03364-b2f2-4bcb-b676-814ab5a9da6d.png" alt="image"></li> <li>Create a Flow, select the type to <code>Logging</code> or <code>Event</code></li> <li>Check you <strong>can</strong> select the <code>logging-event-output</code> and <code>logging-event-output</code></li> <li>Create a ClusterFlow, select the type to <code>Logging</code> or <code>Event</code></li> <li>Check you <strong>can</strong> select the <code>logging-event-output</code> and <code>logging-event-output</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>The <code>logging</code> or the <code>Event</code> type of <code>Flow</code> can only select <code>Logging</code> or <code>Event</code> type of <code>Output</code> <img src="https://user-images.githubusercontent.com/29251855/193512689-d56ddf11-0db8-4a10-ba9f-0425fb22710d.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/193512719-4056e234-7e0a-49e4-9503-7bbd75075e0f.png" alt="image"></p> Login after password reset (e2e_fe) https://harvester.github.io/tests/manual/authentication/login-after-password-reset/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/login-after-password-reset/ - Enter the wrong credential. Enter the correct credential Expected Results Login should fail. 
Login should pass + <ol> <li>Enter the wrong credential.</li> <li>Enter the correct credential</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Login should fail.</li> <li>Login should pass</li> </ol> Logout from the UI and login again https://harvester.github.io/tests/manual/authentication/logout-then-login/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/logout-then-login/ - Logout from the UI and Log in again Expected Results User should be able to logout/login successfully. + <ol> <li>Logout from the UI and Log in again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>User should be able to logout/login successfully.</li> </ol> Maintenance mode for host with multiple VMs https://harvester.github.io/tests/manual/hosts/maintenance-mode-multiple-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-multiple-vm/ - Put host in maintenance mode Migrate VMs Wait for VMs to migrate Wait for any vms to migrate off Do health check on VMs Expected Results Host should start to go into maintenance mode Any VMs should migrate off Host should go into maintenance mode + <ol> <li>Put host in maintenance mode</li> <li>Migrate VMs</li> <li>Wait for VMs to migrate</li> <li>Wait for any vms to migrate off</li> <li>Do health check on VMs</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host should start to go into maintenance mode</li> <li>Any VMs should migrate off</li> <li>Host should go into maintenance mode</li> </ol> Maintenance mode for host with one VM (e2e_be) https://harvester.github.io/tests/manual/hosts/maintenance-mode-one-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-one-vm/ - Put host in maintenance mode Migrate VMs Wait for VMs to migrate Wait for any vms to migrate off Do health check on VMs Expected Results Host should start to go into maintenance mode Any VMs should migrate off Host should go into maintenance mode + <ol> <li>Put host in maintenance mode</li> <li>Migrate VMs</li> <li>Wait for VMs to migrate</li> <li>Wait for any vms to migrate off</li> <li>Do health check on VMs</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host should start to go into maintenance mode</li> <li>Any VMs should migrate off</li> <li>Host should go into maintenance mode</li> </ol> Maintenance mode on node with no vms (e2e_be) https://harvester.github.io/tests/manual/hosts/maintenance-mode-no-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-no-vm/ - Put host in maintenance mode Wait for host to go from entering maintenance mode to maintenance mode. 
Expected Results Host should start to go into maintenance mode Host should go into maintenance mode + <ol> <li>Put host in maintenance mode</li> <li>Wait for host to go from entering maintenance mode to maintenance mode.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host should start to go into maintenance mode</li> <li>Host should go into maintenance mode</li> </ol> Manual upgrade from 0.3.0 to 1.0.0 https://harvester.github.io/tests/manual/deployment/manual-upgrade-from-0.3.0-to-1.0.0/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/manual-upgrade-from-0.3.0-to-1.0.0/ - Related issues: #1644 Harvester pod crashes after upgrading from v0.3.0 to v1.0.0-rc1 (contain vm backup before upgrade) Related issues: #1588 VM backup cause harvester pod to crash Notice We recommend using zero downtime upgrade to upgrade harvester. Manual upgrade is for advance usage and purpose. Category: Manual Upgrade Verification Steps Download harvester v0.3.0 iso and do checksum Download harvester v1.0.0 iso and do checksum Use ISO Install a 4 nodes harvester cluster Create several OS images from URL Create ssh key Enable vlan network with harvester-mgmt Create virtual network vlan1 with id 1 Create 2 virtual machines ubuntu-vm: 2 core, 4GB memory, 30GB disk Setup backup target Take a backup from ubuntu vm Peform manual upgrade steps in the following docudment upgrade process Follow the manual upgrade steps to upgrade from v0. + <ul> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/1644">#1644</a> Harvester pod crashes after upgrading from v0.3.0 to v1.0.0-rc1 (contain vm backup before upgrade)</p> </li> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/1588">#1588</a> VM backup cause harvester pod to crash</p> </li> </ul> <h2 id="notice">Notice</h2> <p>We recommend using zero downtime upgrade to upgrade harvester. 
Manual upgrade is for advanced usage and purposes.</p> <h2 id="category">Category:</h2> <ul> <li>Manual Upgrade</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Download harvester v0.3.0 iso and do checksum</li> <li>Download harvester v1.0.0 iso and do checksum</li> <li>Use ISO to install a 4-node harvester cluster</li> <li>Create several OS images from URL</li> <li>Create ssh key</li> <li>Enable vlan network with <code>harvester-mgmt</code></li> <li>Create virtual network <code>vlan1</code> with id <code>1</code></li> <li>Create 2 virtual machines</li> </ol> <ul> <li>ubuntu-vm: 2 core, 4GB memory, 30GB disk</li> </ul> <ol> <li>Setup backup target</li> <li>Take a backup from ubuntu vm</li> <li>Perform manual upgrade steps in the following document</li> </ol> <p><strong>upgrade process</strong> Follow the manual upgrade steps to upgrade from v0.3.0 to v1.0.0-rc1 <a href="https://github.com/harvester/docs/blob/a4be9a58441eeee3b5564b70e499dc69c6040cc8/docs/upgrade.md">https://github.com/harvester/docs/blob/a4be9a58441eeee3b5564b70e499dc69c6040cc8/docs/upgrade.md</a></p> Memory overcommit on VM https://harvester.github.io/tests/manual/virtual-machines/memory_overcommit/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/memory_overcommit/ - Ref: https://github.com/harvester/harvester/issues/1537 Verify Items Overcommit can be edited on Dashboard VM can allocate Memory exceeding what the host Node has VM can change allocated Memory after creation Case: Update Overcommit configuration Install Harvester with any Node Login to Dashboard, then navigate to Advanced Settings Edit overcommit-config The field of Memory should be editable The maximum Memory a created VM can allocate should be &lt;HostMemory&gt; * [&lt;overcommit-Memory&gt;/100] - &lt;Host Reserved&gt; Case: VM can allocate Memory more than Host has Install Harvester with any Node Create a cloud image for VM Creation Create a VM with &lt;HostMemory&gt; * 1. 
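The overcommit formula above can be sanity-checked from the command line. A minimal sketch, assuming the cluster exposes its settings as settings.harvesterhci.io resources and assuming a node with 64Gi of RAM, a memory ratio of 150 and roughly 8Gi reserved for the host (all three numbers are illustrative only):
<pre tabindex="0"><code># Sketch only: read the overcommit ratios (resource/field names may differ between Harvester versions)
$ kubectl get settings.harvesterhci.io overcommit-config -o jsonpath='{.value}'
# e.g. {"cpu":1600,"memory":150,"storage":200}

# Worked example of the formula HostMemory * overcommit-Memory/100 - HostReserved:
#   64 * 150/100 - 8 = 88Gi of schedulable VM memory on that node
</code></pre>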
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/1537">https://github.com/harvester/harvester/issues/1537</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Overcommit can be edit on Dashboard</li> <li>VM can allocate exceed Memory on the host Node</li> <li>VM can chage allocated Memory after created</li> </ul> <h2 id="case-update-overcommit-configuration">Case: Update Overcommit configuration</h2> <ol> <li>Install Harvester with any Node</li> <li>Login to Dashboard, then navigate to <strong>Advanced Settings</strong></li> <li>Edit <code>overcommit-config</code></li> <li>The field of <strong>Memory</strong> should be editable</li> <li>Created VM can allocate maximum Memory should be <code>&lt;HostMemory&gt; * [&lt;overcommit-Memory&gt;/100] - &lt;Host Reserved&gt;</code></li> </ol> <h2 id="case-vm-can-allocate-memory-more-than-host-have">Case: VM can allocate Memory more than Host have</h2> <ol> <li>Install Harvester with any Node</li> <li>Create a cloud image for VM Creation</li> <li>Create a VM with <code>&lt;HostMemory&gt; * 1.2</code> Memory</li> <li>VM should start successfully</li> <li><code>lscpu</code> in VM should display allocated Memory</li> <li>Page of Virtual Machines should display allocated Memory correctly</li> </ol> <h2 id="case-update-vm-allocated-memory">Case: Update VM allocated Memory</h2> <ol> <li>Install Harvester with any Node</li> <li>Create a cloud image for VM Creation</li> <li>Create a VM with <code>&lt;HostMemory&gt; * 1.2</code> Memory</li> <li>VM should start successfully</li> <li>Increase/Reduce VM allocated Memory to minimum/maximum</li> <li>VM should start successfully</li> <li><code>lscpu</code> in VM should display allocated Memory</li> <li>Page of Virtual Machines should display allocated Memory correctly</li> </ol> Migrate a turned on VM from one host to another https://harvester.github.io/tests/manual/live-migration/migrate-turned-on-vm-to-another-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-turned-on-vm-to-another-host/ - Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM created with cloud init config data https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-cloud-init/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-cloud-init/ - Create a new VM with cloud init config data Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via 
console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM with cloud init config data</li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM created with user data config https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-user-data/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-user-data/ - Create a new VM with a password specified by user data config Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM with a password specified by user data config</li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM that has multiple volumes https://harvester.github.io/tests/manual/live-migration/migrate-vm-multiple-volumes/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-multiple-volumes/ - Create a new VM with a root disk and a CDROM volume Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM with a root disk and a CDROM volume</li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and 
open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM that was created from a template https://harvester.github.io/tests/manual/live-migration/migrate-vm-created-from-template/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-created-from-template/ - Create a new VM from a template Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM from a template</li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM that was created using a restore backup to new VM https://harvester.github.io/tests/manual/live-migration/migrate-vm-created-from-restore-to-new/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-created-from-restore-to-new/ - Take an existing backup Restore the backup to a new VM Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Take an existing backup</li> <li>Restore the backup to a new VM</li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM with 1 backup https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-one-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-one-backup/ - Create a new VM Create a 
backup Add a new file to the home directory Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM</li> <li>Create a backup</li> <li>Add a new file to the home directory</li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM with a saved SSH Key https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-ssh/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-ssh/ - Create a new VM with an SSH key Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM with an SSH key</li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM with multiple backups https://harvester.github.io/tests/manual/live-migration/migrate-vm-multiple-backups/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-multiple-backups/ - Create a new VM Create a backup Add a new file to the home directory Create a new backup Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM</li> <li>Create a backup</li> <li>Add a new file to the home directory</li> <li>Create a new backup</li> <li>Create a new file on the 
machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM with multiple networks https://harvester.github.io/tests/manual/live-migration/migrate-vm-multiple-networks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-multiple-networks/ - Create a new VM with one management network in masquerade mode one VLAN network Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM with <ul> <li>one management network in masquerade mode</li> <li>one VLAN network</li> </ul> </li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate back VMs that were on host after taking host out of maintenance mode https://harvester.github.io/tests/manual/hosts/q-maintenance-mode-migrate-back-vms/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/q-maintenance-mode-migrate-back-vms/ - Prerequisite: Have a Harvester cluster with at least 2 nodes setup. Test Steps: Given Create a vm with node selector lets say node-1. And Create a vm without node selector on node-1. AND Write some data into both the VMs. When Put the host node-1 into maintenance mode. Then All the Vms on node-1 should be migrated to other nodes or the node should show warning that the vm with node selector can&rsquo;t migrate. 
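For the maintenance-mode scenario above, VM placement can be verified outside the UI as well. A hedged sketch using stock kubectl/KubeVirt commands; the node name node-1 comes from the test steps, everything else is generic:
<pre tabindex="0"><code># Show which node each running VM instance is scheduled on (NODE column)
$ kubectl get vmi -A -o wide

# After node-1 enters maintenance mode it should be cordoned,
# i.e. reported as Ready,SchedulingDisabled
$ kubectl get nodes

# Expectation from the test: VMs without a node selector move to another node,
# while the VM pinned to node-1 stays and the UI warns it cannot be migrated.
</code></pre>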
+ <h3 id="prerequisite">Prerequisite:</h3> <p>Have a Harvester cluster with at least 2 nodes setup.</p> <h3 id="test-steps">Test Steps:</h3> <p><strong>Given</strong> Create a vm with node selector lets say node-1.</p> <p><strong>And</strong> Create a vm without node selector on node-1.</p> <p><strong>AND</strong> Write some data into both the VMs.</p> <p><strong>When</strong> Put the host node-1 into maintenance mode.</p> <p><strong>Then</strong> All the Vms on node-1 should be migrated to other nodes or the node should show warning that the vm with node selector can&rsquo;t migrate.</p> Migrate to Node without replicaset https://harvester.github.io/tests/manual/live-migration/migrate-to-node-without-replicaset/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-to-node-without-replicaset/ - Create a new VM on a 4 node cluster Check which nodes have copies of the replica set Migrate the VM to the host that does not have the volume Expected Results VM should create correctly + <ol> <li>Create a new VM on a 4 node cluster</li> <li>Check which nodes have copies of the replica set</li> <li>Migrate the VM to the host that does not have the volume</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create correctly</li> </ol> Migrate VM from Restored backup https://harvester.github.io/tests/manual/live-migration/restored_vm_migration/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/restored_vm_migration/ - Ref: https://github.com/harvester/harvester/issues/1086 Verify Items VM can be migrate to any node with any times Case: Migrate a restored VM Install Harvester with at least 2 nodes setup backup-target with NFS Create image for VM creation Create VM a Add file with some data in VM a Backup VM a as a-bak Restore backup a-bak into VM b Start VM b then check added file should exist with same content Migrate VM b to another node, then check added file should exist with same content Migrate VM b again, then check added file should exist with same content + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1086">https://github.com/harvester/harvester/issues/1086</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>VM can be migrate to any node with any times</li> </ul> <h2 id="case-migrate-a-restored-vm">Case: Migrate a restored VM</h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>setup backup-target with NFS</li> <li>Create image for VM creation</li> <li>Create VM <strong>a</strong></li> <li>Add file with some data in VM <strong>a</strong></li> <li>Backup VM <strong>a</strong> as <strong>a-bak</strong></li> <li>Restore backup <strong>a-bak</strong> into VM <strong>b</strong></li> <li>Start VM <strong>b</strong> then check added file should exist with same content</li> <li>Migrate VM <strong>b</strong> to another node, then check added file should exist with same content</li> <li>Migrate VM <strong>b</strong> again, then check added file should exist with same content</li> </ol> Move Longhorn storage to another partition https://harvester.github.io/tests/manual/hosts/move-longhorn-storage-to-another-partition/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/move-longhorn-storage-to-another-partition/ - Related issue: #1316 Move Longhorn storage to another partition Category: Storage Test Scenarios Case 1: UEFI + GPT (Disk &lt; MBR Limit) Case 2: BIOS + No MBR (Disk &lt; MBR Limit) Case 3: BIOS + Force MBR (Disk &lt; MBR Limit) 
Case 4: BIOS + No MBR (Disk &gt; MBR Limit) Case 5: BIOS + Force MBR (Disk &gt; MBR Limit) Case 6: UEFI + GPT (Disk &gt; MBR Limit) Environment setup Test Environment: 1 node harvester on local kvm machine + <ul> <li>Related issue: <a href="https://github.com/harvester/harvester/issues/1316">#1316</a> Move Longhorn storage to another partition</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="test-scenarios">Test Scenarios</h2> <p><img src="https://user-images.githubusercontent.com/29251855/148171176-5dfe439b-8f61-484b-8c16-9c0236a5c1f2.png" alt="image"></p> <ul> <li>Case 1: UEFI + GPT (Disk &lt; MBR Limit)</li> <li>Case 2: BIOS + No MBR (Disk &lt; MBR Limit)</li> <li>Case 3: BIOS + Force MBR (Disk &lt; MBR Limit)</li> <li>Case 4: BIOS + No MBR (Disk &gt; MBR Limit)</li> <li>Case 5: BIOS + Force MBR (Disk &gt; MBR Limit)</li> <li>Case 6: UEFI + GPT (Disk &gt; MBR Limit)</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ul> <li> <p>Test Environment: 1 node harvester on local kvm machine</p> Multi-browser login https://harvester.github.io/tests/manual/authentication/multi-browser-login/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/multi-browser-login/ - Login via Chrome, firefox, edge, safari etc Expected Results Chrome, firefox, edge, safari etc should have same behavior. + <ol> <li>Login via Chrome, firefox, edge, safari etc</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Chrome, firefox, edge, safari etc should have same behavior.</li> </ol> Multiple Disks Swapping Paths https://harvester.github.io/tests/manual/_incoming/1874-extra-disk-swap-path/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1874-extra-disk-swap-path/ - Related issues: #1874 Multiple Disks Swapping Paths Verification Steps Prepare a harvester cluster (single node is sufficient) Prepare two additional disks and format both of them. Hotplug both disks and add them to the host via Harvester Dashboard (&ldquo;Hosts&rdquo; &gt; &ldquo;Edit Config&rdquo; &gt; &ldquo;Disks&rdquo;) Shutdown the host. Swap the address and slot of the two disks in order to make their dev paths swapped For libvirt environment, you can swap &lt;address&gt; and &lt;target&gt; in the XML of the disk. 
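For the disk-swapping steps just listed, a libvirt-based test host can do the swap with plain virsh. A sketch, assuming the Harvester node runs as a libvirt domain named harvester-node-0 (the actual domain name depends on your vagrant/ipxe setup):
<pre tabindex="0"><code># See which target (vdb, vdc, ...) each extra disk is currently attached as
$ virsh domblklist harvester-node-0

# Shut the domain down, then swap the address and target elements of the two
# extra disks in the domain XML so their /dev paths are exchanged on next boot
$ virsh shutdown harvester-node-0
$ virsh edit harvester-node-0
$ virsh start harvester-node-0
</code></pre>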
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1874">#1874</a> Multiple Disks Swapping Paths</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare a harvester cluster (single node is sufficient)</li> <li>Prepare two additional disks and format both of them.</li> <li>Hotplug both disks and add them to the host via Harvester Dashboard (&ldquo;Hosts&rdquo; &gt; &ldquo;Edit Config&rdquo; &gt; &ldquo;Disks&rdquo;)</li> <li>Shutdown the host.</li> <li>Swap the address and slot of the two disks so that their dev paths are swapped <ul> <li>For libvirt environment, you can swap <code>&lt;address&gt;</code> and <code>&lt;target&gt;</code> in the XML of the disk.</li> </ul> </li> <li>Reboot the host</li> <li>Navigate to the &ldquo;Host&rdquo; page, both disks should be healthy and scheduled.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Disks should be healthy and <code>schedulable</code> after paths swapped.</li> </ol> Namespace pending on terminating https://harvester.github.io/tests/manual/_incoming/2591_namespace_pending_on_terminating/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2591_namespace_pending_on_terminating/ - Ref: https://github.com/harvester/harvester/issues/2591 Verify Steps: Install Harvester with any nodes Login to dashboard and navigate to Namespaces Try to delete any namespace; the prompt window should show a warning message + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2591">https://github.com/harvester/harvester/issues/2591</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/185376639-66d10a36-7f68-4689-9cd6-4ef6034f1aac.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to dashboard and navigate to <em>Namespaces</em></li> <li>Try to delete any namespace; the prompt window should show a warning message</li> </ol> Negative change backup target while restoring backup https://harvester.github.io/tests/manual/_incoming/2560-change-backup-target-while-restoring/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2560-change-backup-target-while-restoring/ - Related issues: #2560 [BUG] VM hanging on restoring state when backup-target disconnected suddenly Category: Category Verification Steps Install Harvester with any nodes Login to Dashboard then navigate to Advanced/Settings, setup backup-target with NFS or S3 Create Image for VM creation Create VM vm1 Take Backup vm1b from vm1 Restore the backup vm1b to New/Existing VM While the VM is still in the restoring state, update the backup-target setting to Use the default value, then set it back. 
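The backup target and the in-flight restore referenced in the steps above can also be inspected with kubectl. A sketch, assuming Harvester's usual setting and CRD names, which may differ slightly between releases:
<pre tabindex="0"><code># Current backup target (NFS or S3 endpoint configured in Advanced/Settings)
$ kubectl get settings.harvesterhci.io backup-target -o jsonpath='{.value}'

# Watch the restore while the backup-target setting is being changed
$ kubectl get virtualmachinerestores.harvesterhci.io -A -w
</code></pre>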
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2560">#2560</a> [BUG] VM hanging on restoring state when backup-target disconnected suddenly</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Category</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to Advanced/Settings, setup backup-target with NFS or S3</li> <li>Create Image for VM creation</li> <li>Create VM vm1</li> <li>Take Backup vm1b from vm1</li> <li>Restore the backup vm1b to New/Existing VM</li> <li>When the VM still in restoring state, update backup-target settings to Use the default value then setup it back.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error <img src="https://user-images.githubusercontent.com/5169694/182815277-98baa7bc-42d1-4404-be87-d60f3b6ba1fd.png" alt="image"></li> </ol> Negative create backup on store that is full (NFS) https://harvester.github.io/tests/manual/backup-and-restore/negative-backup-full-backup-target/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-backup-full-backup-target/ - Initiate a backup with existing VM where the NFS store is full Expected Results You should get an error + <ol> <li>Initiate a backup with existing VM where the NFS store is full</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error</li> </ol> Negative Create Backup Target https://harvester.github.io/tests/manual/backup-and-restore/negative-create-backup-target/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-create-backup-target/ - Open up Backup-target in settings Input Incorrect server info Save Expected Results You should get an error on saving + <ol> <li>Open up Backup-target in settings</li> <li>Input Incorrect server info</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error on saving</li> </ol> Negative delete backup while restore is in progress https://harvester.github.io/tests/manual/backup-and-restore/negative-delete-backup-while-restoring/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-delete-backup-while-restoring/ - Create a backup of VM which has data more than 10Gi. Add 2Gi data in the same VM. Initiate deletion of the backup. While deletion is in progress, create another backup Expected Results Creation of backup should be prevented as there is a deletion is in progress. 
Once the deletion is completed, the backup creation should take place + <ol> <li>Create a backup of VM which has data more than 10Gi.</li> <li>Add 2Gi data in the same VM.</li> <li>Initiate deletion of the backup.</li> <li>While deletion is in progress, create another backup</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Creation of backup should be prevented as there is a deletion is in progress.</li> <li>Once the deletion is completed, the backup creation should take place</li> </ol> Negative delete multiple backups https://harvester.github.io/tests/manual/backup-and-restore/negative-delete-multiple-backups/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-delete-multiple-backups/ - Disconnect Backup Target Select multiple Backups from Backups list Click Delete Expected Results You should get an error + <ol> <li>Disconnect Backup Target</li> <li>Select multiple Backups from Backups list</li> <li>Click Delete</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error</li> </ol> Negative delete single backup https://harvester.github.io/tests/manual/backup-and-restore/negative-delete-single-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-delete-single-backup/ - Take down backup target either by account, or via network blocking Delete backup from backups list Expected Results You should get an error + <ol> <li>Take down backup target either by account, or via network blocking</li> <li>Delete backup from backups list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error</li> </ol> Negative delete Volume that is in use (e2e_be) https://harvester.github.io/tests/manual/volumes/negative-delete-volume-that-is-in-use/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/negative-delete-volume-that-is-in-use/ - Navigate to Volumes page and check for a volume in use by a VM Try to delete volume Click delete on modal Expected Results Page should load You should get an error message on the delete modal + <ol> <li>Navigate to Volumes page and check for a volume in use by a VM</li> <li>Try to delete volume</li> <li>Click delete on modal</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Page should load</li> <li>You should get an error message on the delete modal</li> </ol> Negative disrupt backup server while restore is in progress https://harvester.github.io/tests/manual/backup-and-restore/negative-disrupt-backup-target-while-restoring/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-disrupt-backup-target-while-restoring/ - Initiate a backup restore from NFS server. Disconnect network from NFS server for 5 secs Verify the restore status Expected Results The restore is not be interrupted and should complete. 
Data should be intact + <ol> <li>Initiate a backup restore from NFS server.</li> <li>Disconnect network from NFS server for 5 secs</li> <li>Verify the restore status</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The restore is not be interrupted and should complete.</li> <li>Data should be intact</li> </ol> Negative edit backup read from file YAML https://harvester.github.io/tests/manual/backup-and-restore/negative-edit-backup-file/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-edit-backup-file/ - Disconnect backup target Edit YAML for backup Read from File Show Diff Save Expected Results You should get an error on saving + <ol> <li>Disconnect backup target</li> <li>Edit YAML for backup</li> <li>Read from File</li> <li>Show Diff</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error on saving</li> </ol> Negative edit backup YAML https://harvester.github.io/tests/manual/backup-and-restore/negative-edit-backup-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-edit-backup-yaml/ - Disconnect backup target Edit YAML for backup Show Diff Save Expected Results You should get an error on saving + <ol> <li>Disconnect backup target</li> <li>Edit YAML for backup</li> <li>Show Diff</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error on saving</li> </ol> Negative Harvester installer input same NIC IP and VIP https://harvester.github.io/tests/manual/_incoming/2229-2377-negative-installer-same-nic-ip-and-vip/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2229-2377-negative-installer-same-nic-ip-and-vip/ - Related issues: #2229 [BUG] input nic ip and vip with same ip address in Harvester-Installer Related issues: #2377 [Backport v1.0.3] input nic ip and vip with same ip address in Harvester-Installer Category: Installation Verification Steps Boot into ISO installer Specify same IP for NIC and VIP Expected Results Error message is displayed + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2229">#2229</a> [BUG] input nic ip and vip with same ip address in Harvester-Installer</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2377">#2377</a> [Backport v1.0.3] input nic ip and vip with same ip address in Harvester-Installer</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Installation</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Boot into ISO installer</li> <li>Specify same IP for NIC and VIP</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Error message is displayed <img src="https://user-images.githubusercontent.com/83787952/178049998-e4eec9fe-d687-4efc-9618-940432d37a3d.png" alt="image"></li> </ol> Negative initiate a backup while system is taking another backup https://harvester.github.io/tests/manual/backup-and-restore/negative-backup-while-taking-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-backup-while-taking-backup/ - Start a VM backup, bk-1 of a VM which has data d1 While the backup is in progress, write some more data d2 in the VM disk and initiate another backup bk-2. 
Verify the backup 1 and backup 2 Expected Results Backup bk-1 should have only d1 data backup bk-2 should have data d1 and d2 + <ol> <li>Start a VM backup, bk-1 of a VM which has data d1</li> <li>While the backup is in progress, write some more data d2 in the VM disk and initiate another backup bk-2.</li> <li>Verify the backup 1 and backup 2</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backup bk-1 should have only d1 data</li> <li>backup bk-2 should have data d1 and d2</li> </ol> Negative migrate a turned on VM from one host to another https://harvester.github.io/tests/manual/live-migration/negative-migrate-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/negative-migrate-vm/ - Migrate the VM from one host in the cluster to another Turn off/disconnect node while migrating Expected Results Migration should fail You should get an error message in the status + <ol> <li>Migrate the VM from one host in the cluster to another</li> <li>Turn off/disconnect node while migrating</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Migration should fail</li> <li>You should get an error message in the status</li> </ol> Negative network comes back up after reboot external VLAN (e2e_be) https://harvester.github.io/tests/manual/network/negative-vlan-after-reboot/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/negative-vlan-after-reboot/ - Start pinging the VM reboot the VM Expected Results The VM should respond The VM should reboot The pings should stop getting responses The pings should start getting responses again + <ol> <li>Start pinging the VM</li> <li>reboot the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should respond</li> <li>The VM should reboot</li> <li>The pings should stop getting responses</li> <li>The pings should start getting responses again</li> </ol> Negative network comes back up after reboot management network (e2e_be) https://harvester.github.io/tests/manual/network/negative-management-after-reboot/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/negative-management-after-reboot/ - Start pinging the VM from the management network reboot the VM Expected Results The VM should respond The VM should reboot The pings should stop getting responses The pings should start getting responses again + <ol> <li>Start pinging the VM from the management network</li> <li>reboot the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should respond</li> <li>The VM should reboot</li> <li>The pings should stop getting responses</li> <li>The pings should start getting responses again</li> </ol> Negative network disconnection for a longer time while migration is in progress https://harvester.github.io/tests/manual/live-migration/negative-network-disconnect-while-migrating/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/negative-network-disconnect-while-migrating/ - Initiate VM migration While migration is in progress, disconnect network for 100 sec on the node where the VM is scheduled Expected Results Migration should fail but volume data should be intact The VM should be accessible during the migration and should also be accessible once the migration fails + <ol> <li>Initiate VM migration</li> <li>While migration is in progress, disconnect network for 100 sec on the node where the VM is scheduled</li> </ol> <h2 id="expected-results">Expected Results</h2> 
<ol> <li>Migration should fail but volume data should be intact</li> <li>The VM should be accessible during the migration and should also be accessible once the migration fails</li> </ol> Negative network disconnection for a short time while migration is in progress https://harvester.github.io/tests/manual/live-migration/negative-network-disruption-while-migrating/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/negative-network-disruption-while-migrating/ - Initiate VM migration. While migration is in progress, disconnect network for 5 sec on the node where the VM is scheduled Expected Results Migration should resume once the network is up again The VM should be accessible during and after the migration + <ol> <li>Initiate VM migration.</li> <li>While migration is in progress, disconnect network for 5 sec on the node where the VM is scheduled</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Migration should resume once the network is up again</li> <li>The VM should be accessible during and after the migration</li> </ol> Negative node down while migration is in progress https://harvester.github.io/tests/manual/live-migration/negative-node-down-while-migrating/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/negative-node-down-while-migrating/ - Initiate VM migration. While migration is in progress, shut the node where the VM is scheduled. After failure, initiate the migration to another node Expected Results Migration should fail but volume data should be intact The VM should be accessible on older node The migration scheduled for another node should work fine The VM should be accessible during and after the migration + <ol> <li>Initiate VM migration.</li> <li>While migration is in progress, shut the node where the VM is scheduled.</li> <li>After failure, initiate the migration to another node</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Migration should fail but volume data should be intact</li> <li>The VM should be accessible on older node</li> <li>The migration scheduled for another node should work fine</li> <li>The VM should be accessible during and after the migration</li> </ol> Negative node un-schedulable during live migration https://harvester.github.io/tests/manual/live-migration/negative-node-unschedulable/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/negative-node-unschedulable/ - Prerequisite: Cluster is of 3 nodes. VM is running on Node-1 Node-2 and Node-3 don&rsquo;t have space to migrate a VM to them. Steps: Create a vm on node-1 Migrate the VM. Expected Results Migration should not be started. Relevant error should be shown on the GUI. 
The existing VM should be accessible and the health check of the VM should be fine + <h2 id="prerequisite">Prerequisite:</h2> <ol> <li>Cluster is of 3 nodes.</li> <li>VM is running on Node-1</li> <li>Node-2 and Node-3 don&rsquo;t have space to migrate a VM to them.</li> </ol> <h2 id="steps">Steps:</h2> <ol> <li>Create a vm on node-1</li> <li>Migrate the VM.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Migration should not be started.</li> <li>Relevant error should be shown on the GUI.</li> <li>The existing VM should be accessible and the health check of the VM should be fine</li> </ol> Negative Power down the node where the VM is getting replaced by the restore https://harvester.github.io/tests/manual/backup-and-restore/negative-power-down-node-while-restoring-replace/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-power-down-node-while-restoring-replace/ - Related issues tests#1263 [ReleaseTesting] Negative Power down the node where the VM is getting replaced by the restore Verification Steps Setup a 3 nodes harvester Create a VM w/ extra disk and some data Backup and shutdown VM Start to observe pod/virt-launcher-VMNAME to get the node VM restoring on for next step. Initiate a restore with existing VM, get node info from pod/virt-launcher-VMNAME. While the restore is in progress and VM is starting on a node, shut down the node + <ul> <li>Related issues <ul> <li><a href="https://github.com/harvester/tests/issues/1263">tests#1263</a> [ReleaseTesting] Negative Power down the node where the VM is getting replaced by the restore</li> </ul> </li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Setup a 3 nodes harvester</p> </li> <li> <p>Create a VM w/ extra disk and some data</p> </li> <li> <p>Backup and shutdown VM</p> </li> <li> <p>Start to observe <code>pod/virt-launcher-VMNAME</code> to get the node VM restoring on for next step.</p> Negative power down the node where the VM is getting restored https://harvester.github.io/tests/manual/backup-and-restore/negative-power-down-node-while-restoring/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-power-down-node-while-restoring/ - Initiate a restore. While the restore is in progress and VM is starting on a node, shut down the node Expected Results The restore should fail + <ol> <li>Initiate a restore.</li> <li>While the restore is in progress and VM is starting on a node, shut down the node</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The restore should fail</li> </ol> Negative Restore a backup while VM is restoring https://harvester.github.io/tests/manual/_incoming/2559-negative-restore-backup-restoring/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2559-negative-restore-backup-restoring/ - Related issues: #2559 [BUG] Backup unable to be restored and the VM can&rsquo;t be deleted Category: Backup/Restore Verification Steps Install Harvester with any nodes Login to Dashboard then navigate to Advanced/Settings, setup backup-target with NFS or S3 Create Image for VM creation Create VM vm1 Take backup from vm1 as vm1b Take backup from vm1 as vm1b2 Click Edit YAML of vm1b, update field status.source.spec.spec.domain.cpu.cores, increase 1 Stop VM vm1 Restore backup vm1b2 with Replace Existing Restore backup vm1b with Replace Existing when the VM vm1 still in state restoring Expected Results You should get an error when trying to restore. 
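The backup objects used in the steps above (vm1b, vm1b2) are ordinary custom resources, so the YAML edit in step 7 can also be done from the CLI. A sketch, assuming the VM lives in the default namespace and the usual Harvester CRD names:
<pre tabindex="0"><code># List the backups taken from vm1
$ kubectl get virtualmachinebackups.harvesterhci.io -n default

# Step 7 of the test: edit vm1b and increase the CPU count recorded in
# status.source.spec.spec.domain.cpu.cores by one
$ kubectl edit virtualmachinebackups.harvesterhci.io vm1b -n default
</code></pre>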
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2559">#2559</a> [BUG] Backup unable to be restored and the VM can&rsquo;t be deleted</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Backup/Restore</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to Advanced/Settings, setup backup-target with NFS or S3</li> <li>Create Image for VM creation</li> <li>Create VM vm1</li> <li>Take backup from vm1 as vm1b</li> <li>Take backup from vm1 as vm1b2</li> <li>Click Edit YAML of vm1b, update field status.source.spec.spec.domain.cpu.cores, increase 1</li> <li>Stop VM vm1</li> <li>Restore backup vm1b2 with Replace Existing</li> <li>Restore backup vm1b with Replace Existing when the VM vm1 still in state restoring</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error when trying to restore. <img src="https://user-images.githubusercontent.com/5370752/182722180-3e2f606b-beef-4f8b-8f33-8d235587db4b.png" alt="image"></li> </ol> Negative restore backup replace existing VM https://harvester.github.io/tests/manual/backup-and-restore/negative-restore-backup-replace/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-restore-backup-replace/ - On multi-node setup bring down node that is hosting VM Navigate to backups list Click restore Backup Select appropriate option Select backup Click restore Expected Results You should get an error on restoring + <ol> <li>On multi-node setup bring down node that is hosting VM</li> <li>Navigate to backups list</li> <li>Click restore Backup</li> <li>Select appropriate option</li> <li>Select backup</li> <li>Click restore</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error on restoring</li> </ol> Negative restore backup replace existing VM with backup from same VM that is turned on https://harvester.github.io/tests/manual/backup-and-restore/negative-restore-backup-replace-while-deleting-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-restore-backup-replace-while-deleting-backup/ - Make sure VM is turned on Navigate to backups list Click restore Backup Select appropriate option Select backup Click restore Delete backup while restoring Expected Results You should get an error + <ol> <li>Make sure VM is turned on</li> <li>Navigate to backups list</li> <li>Click restore Backup</li> <li>Select appropriate option</li> <li>Select backup</li> <li>Click restore</li> <li>Delete backup while restoring</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error</li> </ol> Negative restore backup replace existing VM with backup from same VM that is turned on (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/negative-restore-backup-replace-while-turned-on/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-restore-backup-replace-while-turned-on/ - Make sure VM is turned on Navigate to backups list Click restore Backup Select appropriate option Select backup Click restore Expected Results You get an error that you have to stop VM before restoring backup + <ol> <li>Make sure VM is turned on</li> <li>Navigate to backups list</li> <li>Click restore Backup</li> <li>Select appropriate option</li> <li>Select backup</li> <li>Click restore</li> </ol> <h2 id="expected-results">Expected 
Results</h2> <ol> <li>You get an error that you have to stop VM before restoring backup</li> </ol> Negative vm clone tests https://harvester.github.io/tests/manual/virtual-machines/negative-vm-clone/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/negative-vm-clone/ - Case 1 Create a harvester cluster. Create a VM source-vm with 3 volumes: Image Volume Volume Container Volume After VM starts, run command echo &quot;123&quot; &gt; test.txt &amp;&amp; sync. Click clone button on the source-vm and input new VM name target-vm. Delete source-vm while still cloning Expected Results target-vm should finish cloning After cloning run command cat ~/test.txt in the target-vm. The result should be 123. Case 2 Create a harvester cluster. + <h3 id="case-1">Case 1</h3> <ol> <li>Create a harvester cluster.</li> <li>Create a VM <code>source-vm</code> with 3 volumes: <ul> <li>Image Volume</li> <li>Volume</li> <li>Container Volume</li> </ul> </li> <li>After VM starts, run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code>.</li> <li>Click <code>clone</code> button on the <code>source-vm</code> and input new VM name <code>target-vm</code>.</li> <li>Delete <code>source-vm</code> while still cloning</li> </ol> <h4 id="expected-results">Expected Results</h4> <ul> <li><code>target-vm</code> should finish cloning</li> <li>After cloning run command <code>cat ~/test.txt</code> in the <code>target-vm</code>. The result should be <code>123</code>.</li> </ul> <h3 id="case-2">Case 2</h3> <ol> <li>Create a harvester cluster.</li> <li>Create a VM <code>source-vm</code> with 3 volumes: <ul> <li>Image Volume</li> <li>Volume</li> <li>Container Volume</li> </ul> </li> <li>After VM starts, run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code>.</li> <li>Click <code>clone</code> button on the <code>source-vm</code> and input new VM name <code>target-vm</code>.</li> <li>Turn off node that has <code>source-vm</code> while cloning</li> <li>Wait for clone to finish</li> </ol> <h4 id="expected-results-1">Expected Results</h4> <ul> <li><code>target-vm</code> should finish cloning on node</li> <li><code>source-vm</code> should have migrated to new node</li> <li>After cloning run command <code>cat ~/test.txt</code> in the <code>target-vm</code>. The result should be <code>123</code>.</li> </ul> network-attachment-definitions.k8s.cni.cncf.io https://harvester.github.io/tests/manual/webhooks/q-network-attachment-definitions.k8s.cni.cncf.io/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/webhooks/q-network-attachment-definitions.k8s.cni.cncf.io/ - GUI Enable VLAN network in settings Create a network with VLAN 5 and assume its name is my-network. Create another network with VLAN 5: it should fails with: admission webhook &ldquo;validator.harvesterhci.io&rdquo; denied the request: VLAN ID 5 is already allocated Create a VM on VLAN 5, delete network my-network and it should fail with: admission webhook &ldquo;validator.harvesterhci.io&rdquo; denied the request: network my-network is still used by vm(s): vm-test in a modal Expected Results GUI Unsure of desired behavior. 
+ <h3 id="gui">GUI</h3> <ol> <li>Enable VLAN network in settings</li> <li>Create a network with VLAN 5 and assume its name is my-network.</li> <li>Create another network with VLAN 5: it should fails with: admission webhook &ldquo;<a href="http://validator.harvesterhci.io/">validator.harvesterhci.io</a>&rdquo; denied the request: VLAN ID 5 is already allocated</li> <li>Create a VM on VLAN 5, delete network my-network and it should fail with: admission webhook &ldquo;<a href="http://validator.harvesterhci.io/">validator.harvesterhci.io</a>&rdquo; denied the request: network my-network is still used by vm(s): vm-test in a modal</li> </ol> <h2 id="expected-results">Expected Results</h2> <h3 id="gui-1">GUI</h3> <p>Unsure of desired behavior. Checking and will update.</p> Networkconfigs function check https://harvester.github.io/tests/manual/_incoming/2841-networkconfigs-function-check/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2841-networkconfigs-function-check/ - Related issues: #2841 [FEATURE] Reorganize the networkconfigs UI Category: Network Verification Steps Go to Cluster Networks/Configs Create a cluster network and provide the name Create a Network Config Given the NICs that not been used by mgmt-bo (eg. ens1f1) Use default active-backup mode Check the cluster network config in Active status Go to Networks Create a VLAN network Given the name and vlan id Select the cluster network from drop down list Check the vlan route activity Check the NIC ens1f1 can bind to the cnetwork-bo + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2841">#2841</a> [FEATURE] Reorganize the networkconfigs UI</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Go to Cluster Networks/Configs</p> </li> <li> <p>Create a cluster network and provide the name <img src="https://user-images.githubusercontent.com/29251855/194039791-90a88cc0-879d-44d1-8b81-66a141c13732.png" alt="image"></p> </li> <li> <p>Create a Network Config</p> </li> <li> <p>Given the NICs that not been used by mgmt-bo (eg. 
<code>ens1f1</code>)<br> <img src="https://user-images.githubusercontent.com/29251855/194040174-72813f78-868f-4d02-9f79-023c61632994.png" alt="image"></p> </li> <li> <p>Use default <code>active-backup</code> mode</p> NIC ip and vip can't be the same in Harvester installer https://harvester.github.io/tests/manual/_incoming/2229-2449-nic-ip-vip-different-harvester-installer-copy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2229-2449-nic-ip-vip-different-harvester-installer-copy/ - Related issues: #2229 [BUG] input nic ip and vip with same ip address in Harvester-Installer Related issues: #2449 [backport v1.0] [BUG] input nic ip and vip with same ip address in Harvester-Installer Category: Harvester Installer Verification Steps Launch ISO install process Set static node IP and gateway Set the same node IP to the VIP field and press enter Expected Results During Harvester ISO installer process, when we set static node IP address with the same one as the VIP IP address There will be an error message to prevent the installation process VIP must not be the same as Management NIC IP + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2229">#2229</a> [BUG] input nic ip and vip with same ip address in Harvester-Installer</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2449">#2449</a> [backport v1.0] [BUG] input nic ip and vip with same ip address in Harvester-Installer</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Harvester Installer</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Launch ISO install process</li> <li>Set static node IP and gateway <img src="https://user-images.githubusercontent.com/29251855/173719118-1fd1609d-74f2-4f7d-9ff3-e1d21227e542.png" alt="image"></li> <li>Set the same node IP to the VIP field and press enter<br> <img src="https://user-images.githubusercontent.com/29251855/173719257-f60b55fd-0211-4fb7-8f45-3176eef4e577.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>During Harvester ISO installer process, when we set static node IP address with the same one as the VIP IP address</li> <li>There will be an error message to prevent the installation process <code>VIP must not be the same as Management NIC IP</code> <img src="https://user-images.githubusercontent.com/29251855/173719257-f60b55fd-0211-4fb7-8f45-3176eef4e577.png" alt="image"></li> </ul> Node disk manager should prevent too many concurrent disk formatting occur within a short period https://harvester.github.io/tests/manual/_incoming/1831_node_disk_manager_should_prevent_too_many_concurrent_disk_formatting_occur_within_a_short_period/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1831_node_disk_manager_should_prevent_too_many_concurrent_disk_formatting_occur_within_a_short_period/ - Ref: https://github.com/harvester/harvester/issues/1831 Criteria exceed the maximum, there should have requeue devices which equals the exceeds hit the maximum, there should not have requeue devices less than maximum, there should not have requeue devices Verify Steps: Install Harvester with any node having at least 6 additional disks Login to console and execute command to update log level to debug and max-concurrent-ops to 1 (On KVM environment, we have to set to 1 to make sure the requeuing will happen. 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/1831">https://github.com/harvester/harvester/issues/1831</a></p> <h3 id="criteria">Criteria</h3> <ul> <li><input checked="" disabled="" type="checkbox"> exceed the maximum, there should have requeue devices which equals the exceeds</li> <li><input checked="" disabled="" type="checkbox"> hit the maximum, there should not have requeue devices</li> <li><input checked="" disabled="" type="checkbox"> less than maximum, there should not have requeue devices</li> </ul> <p><img src="https://user-images.githubusercontent.com/5169694/177324553-3b4800b2-9db9-45ec-a3cf-a630acb384cf.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any node having at least 6 additional disks</li> <li>Login to console and execute command to update log level to <code>debug</code> and <code>max-concurrent-ops</code> to <code>1</code> (On KVM environment, we have to set to <code>1</code> to make sure the <em>requeuing</em> will happen.) <ul> <li><code>kubectl patch ds -n harvester-system harvester-node-disk-manager --type=json -p'[{&quot;op&quot;:&quot;replace&quot;, &quot;path&quot;:&quot;/spec/template/spec/containers/0/command&quot;, &quot;value&quot;: [&quot;node-disk-manager&quot;, &quot;--debug&quot;, &quot;--max-concurrent-ops&quot;, &quot;1&quot;]}]'</code></li> </ul> </li> <li>Watching log output by executing <code>kubectl get pods -A | grep node-disk | awk '{system(&quot;kubectl logs -fn &quot;$1&quot; &quot;$2)}'</code></li> <li>Login to dashboard then navigate and edit host to add more than <code>1</code> disks</li> <li>In the console log, should display <code>Hit maximum concurrent count. Requeue device &lt;device id&gt;</code></li> <li>In the dashboard, disks should be added successfully.</li> <li>Login to console and execute command to update log level to <code>debug</code> and <code>max-concurrent-ops</code> to <code>2</code> <ul> <li><code>kubectl patch ds -n harvester-system harvester-node-disk-manager --type=json -p'[{&quot;op&quot;:&quot;replace&quot;, &quot;path&quot;:&quot;/spec/template/spec/containers/0/command&quot;, &quot;value&quot;: [&quot;node-disk-manager&quot;, &quot;--debug&quot;, &quot;--max-concurrent-ops&quot;, &quot;2&quot;]}]'</code></li> </ul> </li> <li>Watching log output by executing <code>kubectl get pods -A | grep node-disk | awk '{system(&quot;kubectl logs -fn &quot;$1&quot; &quot;$2)}'</code></li> <li>Login to dashboard then navigate and edit host to add <code>2</code> disks</li> <li>In the console log, there should not display <code>Hit maximum concurrent count. Requeue device &lt;device id&gt;</code></li> <li>In the dashboard, disks should be added successfully.</li> <li>Login to console and execute command to update log level to <code>debug</code> <ul> <li><code>kubectl patch ds -n harvester-system harvester-node-disk-manager --type=json -p'[{&quot;op&quot;:&quot;replace&quot;, &quot;path&quot;:&quot;/spec/template/spec/containers/0/command&quot;, &quot;value&quot;: [&quot;node-disk-manager&quot;, &quot;--debug&quot;]}]'</code></li> </ul> </li> <li>Watching log output by executing <code>kubectl get pods -A | grep node-disk | awk '{system(&quot;kubectl logs -fn &quot;$1&quot; &quot;$2)}'</code></li> <li>Login to dashboard then navigate and edit host to add less than <code>5</code> disks</li> <li>In the console log, there should not display <code>Hit maximum concurrent count. 
Requeue device &lt;device id&gt;</code></li> <li>In the dashboard, disks should be added successfully.</li> </ol> Node join fails with self-signed certificate https://harvester.github.io/tests/manual/_incoming/2736_node_join_fails_with_self-signed_certificate/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2736_node_join_fails_with_self-signed_certificate/ - Ref: https://github.com/harvester/harvester/issues/2736 Verified this bug has been fixed. Test Information Environment: qemu/KVM 2 nodes Harvester Version: master-032742f0-head ui-source Option: Auto Verify Steps: Follow Steps in https://github.com/harvester/harvester-installer/pull/335 + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2736">https://github.com/harvester/harvester/issues/2736</a></p> <p>Verified this bug has been fixed.</p> <h2 id="test-information">Test Information</h2> <ul> <li>Environment: <strong>qemu/KVM 2 nodes</strong></li> <li>Harvester Version: <strong>master-032742f0-head</strong></li> <li><strong>ui-source</strong> Option: <strong>Auto</strong></li> </ul> <h3 id="verify-steps">Verify Steps:</h3> <ul> <li>Follow Steps in <a href="https://github.com/harvester/harvester-installer/pull/335">https://github.com/harvester/harvester-installer/pull/335</a></li> </ul> Node Labeling for VM scheduling https://harvester.github.io/tests/manual/hosts/node_labeling/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/node_labeling/ - Ref: https://github.com/harvester/harvester/issues/1416 Verify Items Host labels can be assigned during installation via config-create / config-join YAML. Host labels can be managed post installation via the Harvester UI. Host label information can be accessed in Rancher Virtualization Management UI. 
Case: Label node when installing Install Harvester with config file and os.labels option Navigate to Host details then navigate to Labels in Config Check additional labels should be displayed Case: Label node after installed Install Harvester with at least 2 nodes Navigate to Host details then navigate to Labels in Config Use edit config to modify labels Reboot the Node and wait until its state become active Navigate to Host details then Navigate to Labels in Config Check modified labels should be displayed Case: Node&rsquo;s Label availability Install Harvester with at least 2 nodes Navigate to Host details then navigate to Labels in Config Use edit config to modify labels Reboot the Node and wait until its state become active Navigate to Host details then Navigate to Labels in Config Check modified labels should be displayed Install Rancher with any nodes Navigate to Virtualization Management and import former created Harvester Wait Until state become Active Click Name field to visit dashboard repeat step 2-7, and both compare from Harvester&rsquo;s dashboard (accessing via Harvester&rsquo;s VIP) + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1416">https://github.com/harvester/harvester/issues/1416</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Host labels can be assigned during installation via config-create / config-join YAML.</li> <li>Host labels can be managed post installation via the Harvester UI.</li> <li>Host label information can be accessed in Rancher Virtualization Management UI.</li> </ul> <h2 id="case-label-node-when-installing">Case: Label node when installing</h2> <ol> <li>Install Harvester with config file and <a href="https://docs.harvesterhci.io/v1.0/install/harvester-configuration/#oslabels"><strong>os.labels</strong></a> option</li> <li>Navigate to Host details then navigate to Labels in Config</li> <li>Check additional labels should be displayed</li> </ol> <h2 id="case-label-node-after-installed">Case: Label node after installed</h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Navigate to Host details then navigate to <strong>Labels</strong> in Config</li> <li>Use <strong>edit config</strong> to modify labels</li> <li>Reboot the Node and wait until its state become active</li> <li>Navigate to Host details then Navigate to Labels in Config</li> <li>Check modified labels should be displayed</li> </ol> <h2 id="case-nodes-label-availability">Case: Node&rsquo;s Label availability</h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Navigate to Host details then navigate to <strong>Labels</strong> in Config</li> <li>Use <strong>edit config</strong> to modify labels</li> <li>Reboot the Node and wait until its state become active</li> <li>Navigate to Host details then Navigate to Labels in Config</li> <li>Check modified labels should be displayed</li> <li>Install Rancher with any nodes</li> <li>Navigate to <em>Virtualization Management</em> and import former created Harvester</li> <li>Wait Until state become <strong>Active</strong></li> <li>Click <em>Name</em> field to visit dashboard</li> <li>repeat step 2-7, and both compare from Harvester&rsquo;s dashboard (accessing via Harvester&rsquo;s VIP)</li> </ol> Node promotion for topology label https://harvester.github.io/tests/manual/_incoming/2325-node-promotion-for-topology-label/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2325-node-promotion-for-topology-label/ - Related issues: #2325 [FEATURE] Harvester control plane should spread 
across failure domains Category: Host Verification Steps Install first node, the role of this node should be Management Node Install second node, the role of this node should be Compute Node, the second node shouldn&rsquo;t be promoted to Management Node Add label topology.kubernetes.io/zone=zone1 to the first node Install third node, the second node and third node shouldn&rsquo;t be promoted Add label topology. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2325">#2325</a> [FEATURE] Harvester control plane should spread across failure domains</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install first node, the role of this node should be Management Node</li> <li>Install second node, the role of this node should be Compute Node, the second node shouldn&rsquo;t be promoted to Management Node</li> <li>Add label topology.kubernetes.io/zone=zone1 to the first node</li> <li>Install third node, the second node and third node shouldn&rsquo;t be promoted</li> <li>Add label topology.kubernetes.io/zone=zone1 to the second node, the second node and third node shouldn&rsquo;t be promoted</li> <li>Add label topology.kubernetes.io/zone=zone3 to the third node, the second node and third node shouldn&rsquo;t be promoted</li> <li>Change the value of label topology.kubernetes.io/zone from zone1 to zone2 in the second node, the second node and third node will be promoted to Management Node one by one</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Checked can pass the following test scenarios.</p> Nodes with cordoned status should not be in VM migration list https://harvester.github.io/tests/manual/hosts/nodes-with-cordoned-status-should-not-be-in-vm-migration-list/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/nodes-with-cordoned-status-should-not-be-in-vm-migration-list/ - Related issues: #1501 Nodes with cordoned status should not be in the selection list for VM migration Category: Host Verification Steps Create multiple VMs on two of the nodes Set the idle node to cordoned state Edit any config of VM, click migrate Check the available node in the migration list Expected Results Node set in cordoned state will not show up in the available migration list + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1501">#1501</a> Nodes with cordoned status should not be in the selection list for VM migration</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create multiple VMs on two of the nodes</li> <li>Set the idle node to cordoned state</li> <li>Edit any config of VM, click migrate</li> <li>Check the available node in the migration list</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Node set in cordoned state will not show up in the available migration list</p> PCI Devices Controller https://harvester.github.io/tests/manual/advanced/addons/pci-devices-controller/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/addons/pci-devices-controller/ - Pre-requisite Enable PCI devices Create a harvester cluster in bare metal mode. 
Ensure one of the nodes has NIC separate from the management NIC Go to the management interface of the new cluster Go to Advanced -&gt; PCI Devices Validate that the PCI devices aren&rsquo;t enabled Click the link to enable PCI devices Enable PCI devices in the linked addon page Wait for the status to change to Deploy successful Navigate to the PCI devices page Validate that the PCI devices page is populated/populating with PCI devices Case 1 (PCI NIC passthrough) Create a harvester cluster in bare metal mode. + <h2 id="pre-requisite-enable-pci-devices">Pre-requisite Enable PCI devices</h2> <ol> <li>Create a harvester cluster in bare metal mode. Ensure one of the nodes has NIC separate from the management NIC</li> <li>Go to the management interface of the new cluster</li> <li>Go to Advanced -&gt; PCI Devices</li> <li>Validate that the PCI devices aren&rsquo;t enabled</li> <li>Click the link to enable PCI devices</li> <li>Enable PCI devices in the linked addon page</li> <li>Wait for the status to change to Deploy successful</li> <li>Navigate to the PCI devices page</li> <li>Validate that the PCI devices page is populated/populating with PCI devices</li> </ol> <h2 id="case-1-pci-nic-passthrough">Case 1 (PCI NIC passthrough)</h2> <ol> <li>Create a harvester cluster in bare metal mode. Ensure one of the nodes has NIC separate from the management NIC</li> <li>Go to the management interface of the new cluster</li> <li>Go to Advanced -&gt; PCI Devices</li> <li>Check the box representing the PCI NIC device (identify it by the Description or the VendorId/DeviceId combination)</li> <li>Click Enable Passthrough</li> <li>When the NIC device is in an Enabled state, create a VM</li> <li>After creating the VM, edit the Config</li> <li>In the &ldquo;PCI Devices&rdquo; section, click the &ldquo;Available PCI Devices&rdquo; dropdown</li> <li>Select the PCI NIC device that has been enabled for passthrough</li> <li>Click Save</li> <li>Start the VM</li> <li>Once the VM is booted, run <code>lspci</code> at the command line (make sure the VM has the <code>pciutils</code> package) and verify that the PCI NIC device shows up</li> <li>(Optional) Install the driver for your PCI NIC device (if it hasn&rsquo;t been autoloaded)</li> </ol> <h3 id="case-1-dependencies">Case 1 dependencies:</h3> <ul> <li>PCI NIC separate from management network</li> <li>Enable PCI devices</li> </ul> <h2 id="case-2-gpu-passthrough">Case 2 (GPU passthrough)</h2> <h3 id="case-2-1-add-gpu">Case 2-1 Add GPU</h3> <ol> <li>Create a harvester cluster in bare metal mode. 
Ensure one of the nodes has a GPU separate from the management NIC</li> <li>Go to the management interface of the new cluster</li> <li>Go to Advanced -&gt; PCI Devices</li> <li>Check the box representing the GPU device (identify it by the Description or the VendorId/DeviceId combination)</li> <li>Click Enable Passthrough</li> <li>When the GPU device is in an Enabled state, create a VM</li> <li>After creating the VM, edit the Config</li> <li>In the &ldquo;PCI Devices&rdquo; section, click the &ldquo;Available PCI Devices&rdquo; dropdown</li> <li>Select the GPU device that has been enabled for passthrough</li> <li>Click Save</li> <li>Start the VM</li> <li>Once the VM is booted, run <code>lspci</code> at the command line (make sure the VM has the <code>pciutils</code> package) and verify that the GPU device shows up</li> <li>Install the driver for your GPU device <ol> <li>if the device is from NVIDIA: (this is for ubuntu, but the opensuse installation instructions are <a href="https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#suse-installation">here</a>) <pre tabindex="0"><code>wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb sudo dpkg -i cuda-keyring_1.1-1_all.deb sudo apt-get update sudo apt-get -y install cuda nvidia-cuda-toolkit build-essential </code></pr Polish harvester machine config in Rancher https://harvester.github.io/tests/manual/_incoming/2598-polish-harvester-machine-config-in-rancher/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2598-polish-harvester-machine-config-in-rancher/ - Related issues: #2598 [BUG]Polish harvester machine config Category: Rancher integration Verification Steps Import Harvester from Rancher Create a standard user local in Rancher User &amp; Authentication Open Cluster Management page Edit cluster config Expand Member Roles Add local user with Cluster Owner role Create cloud credential of Harvester Login with local user Open the provisioning RKE2 cluster page Select Advanced settings Add Pod Scheduling Select Pods in these namespaces Check the list of available pods with the namespaces options above Check can input Topology key value Access Harvester UI (Not from Rancher) Open project/namespace Create several namespaces Login local user to Rancher Open the the provisioning RKE2 cluster page Check the available Pods in these namespaces list have been updated Expected Results Checked the following test plan for RKE2 cluster are working as expected + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2598">#2598</a> [BUG]Polish harvester machine config</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import Harvester from Rancher</li> <li>Create a standard user <code>local</code> in Rancher User &amp; Authentication</li> <li>Open Cluster Management page</li> <li>Edit cluster config <img src="https://user-images.githubusercontent.com/29251855/182781682-5cdd3c6a-517b-4f61-980d-3ee3cab86745.png" alt="image"></li> <li>Expand Member Roles</li> <li>Add <code>local</code> user with Cluster Owner role <img src="https://user-images.githubusercontent.com/29251855/182781823-b71ba504-6488-4581-b50d-17c333496b8c.png" alt="image"></li> <li>Create cloud credential of Harvester</li> <li>Login with <code>local</code> user</li> <li>Open the provisioning RKE2 cluster page</li> <li>Select Advanced settings</li> <li>Add Pod Scheduling</li> 
<li>Select <code>Pods in these namespaces</code></li> <li>Check the list of available pods with the namespaces options above</li> <li>Check can input Topology key value</li> <li>Access Harvester UI (Not from Rancher)</li> <li>Open project/namespace</li> <li>Create several namespaces</li> <li>Login <code>local</code> user to Rancher</li> <li>Open the provisioning RKE2 cluster page</li> <li>Check the available <code>Pods in these namespaces</code> list has been updated</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Checked the following test plan for <code>RKE2</code> cluster is working as expected</p> Power down a node out of three nodes available for the Cluster https://harvester.github.io/tests/manual/deployment/negative-power-off-one-node-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/negative-power-off-one-node-cluster/ - Create a three nodes cluster for Harvester. Power down an added node. Expected Results On power down the node, the status of the node should become down. Harvester system should still be up. + <ol> <li>Create a three nodes cluster for Harvester.</li> <li>Power down an added node.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>On power down the node, the status of the node should become down.</li> <li>Harvester system should still be up.</li> </ol> Power down and power up the node https://harvester.github.io/tests/manual/hosts/negative-power-down-power-up-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/negative-power-down-power-up-node/ - Create two vms on a cluster. Power down the node. Try to migrate a VM from the down node to active node. Leave the 2nd vm as it is. Power on the node Expected Results The 1st VM should be migrated to the other node on manually doing it. The 2nd VM should be accessible once the node is up. Known bugs https://github.com/harvester/harvester/issues/982 + <ol> <li>Create two vms on a cluster.</li> <li>Power down the node.</li> <li>Try to migrate a VM from the down node to active node.</li> <li>Leave the 2nd vm as it is.</li> <li>Power on the node</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The 1st VM should be migrated to the other node on manually doing it.</li> <li>The 2nd VM should be accessible once the node is up.</li> </ol> <h3 id="known-bugs">Known bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/982">https://github.com/harvester/harvester/issues/982</a></p> Power down the management node. https://harvester.github.io/tests/manual/deployment/negative-power-down-management-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/negative-power-down-management-node/ - Create a three nodes cluster for Harvester. Power down the first node which was added to the cluster. Expected Results On power down the node, the status of the node should become down. Harvester system should still be up. + <ol> <li>Create a three nodes cluster for Harvester.</li> <li>Power down the first node which was added to the cluster.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>On power down the node, the status of the node should become down.</li> <li>Harvester system should still be up.</li> </ol> Power down the node https://harvester.github.io/tests/manual/hosts/negative-power-down-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/negative-power-down-node/ - Create two vms on a cluster.
Power down the node. Try to migrate a VM from the down node to active node. Leave the 2nd vm as it is. Expected Results The 1st VM should be migrated to other node on manually doing it. The 2nd VM should be recovered from the lost node + <ol> <li>Create two vms on a cluster.</li> <li>Power down the node.</li> <li>Try to migrate a VM from the down node to active node.</li> <li>Leave the 2nd vm as it is.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The 1st VM should be migrated to other node on manually doing it.</li> <li>The 2nd VM should be recovered from the lost node</li> </ol> Power node triggers VM reschedule https://harvester.github.io/tests/manual/hosts/vm_rescheduled_after_host_poweroff/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/vm_rescheduled_after_host_poweroff/ - Ref: N/A, legacy test case, VM is not migrated but rescheduled Criteria VM should created and started successfully Node should be unavailable after shutdown VM should be restarted automatically Verify Steps: Install Harvester with at least 2 nodes Create a image for VM creation Create a VM vm1 and start it vm1 should started successfully Power off the node hosting vm1 the node should becomes unavailable on dashboard VM vm1 should be restarted automatically after vm-force-reset-policy seconds + <p>Ref: N/A, legacy test case, VM is not migrated but rescheduled</p> <h3 id="criteria">Criteria</h3> <ul> <li><input checked="" disabled="" type="checkbox"> VM should created and started successfully</li> <li><input checked="" disabled="" type="checkbox"> Node should be unavailable after shutdown</li> <li><input checked="" disabled="" type="checkbox"> VM should be restarted automatically</li> </ul> <h2 id="verify-steps">Verify Steps:</h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Create a image for VM creation</li> <li>Create a VM <code>vm1</code> and start it</li> <li><code>vm1</code> should started successfully</li> <li>Power off the node hosting <code>vm1</code></li> <li>the node should becomes unavailable on dashboard</li> <li>VM <code>vm1</code> should be restarted automatically after <code>vm-force-reset-policy</code> seconds</li> </ol> Press the Enter key in setting field shouldn't refresh page https://harvester.github.io/tests/manual/_incoming/2569-press-enter-settings-should-not-refresh-page-copy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2569-press-enter-settings-should-not-refresh-page-copy/ - Related issues: #2569 [BUG] Press the Enter key, the page will be refreshed automatically Category: Settings Verification Steps Check every page have input filed in the Settings page Move cursor to any input field Click the Enter button Check the page will not be automatically loaded Expected Results On v1.0.3 backport, when we press the Enter key in the following page fields, it will not being refreshed automatically. 
Also checked the following pages + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2569">#2569</a> [BUG] Press the Enter key, the page will be refreshed automatically</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Settings</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Check every page have input filed in the Settings page</li> <li>Move cursor to any input field</li> <li>Click the <code>Enter</code> button</li> <li>Check the page will not be automatically loaded</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>On v1.0.3 backport, when we press the <code>Enter</code> key in the following page fields, it will not being refreshed automatically.</p> Prevent normal users create harvester-public namespace https://harvester.github.io/tests/manual/_incoming/2485-prevent-normal-user-create-harvesterpublic-ns/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2485-prevent-normal-user-create-harvesterpublic-ns/ - Related issues: #2485 [FEATURE] [Harvester Node Driver v2] Prevent normal users from creating VMs in harvester-public namespace Category: Rancher integration Verification Steps Import Harvester from Rancher Create standard user in Rancher User &amp; Authentication Edit Harvester in virtualization Management, assign Cluster Member role to user Login with user Create cloud credential Provision an RKE2 cluster Check the namespace dropdown list Expected Results Now the standard user with cluster member rights won&rsquo;t display harvester-public while user node driver to provision the RKE2 cluster. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2485">#2485</a> [FEATURE] [Harvester Node Driver v2] Prevent normal users from creating VMs in harvester-public namespace</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import Harvester from Rancher</li> <li>Create standard <code>user</code> in Rancher User &amp; Authentication</li> <li>Edit Harvester in virtualization Management, assign Cluster Member role to user <img src="https://user-images.githubusercontent.com/29251855/191748214-50fd7290-e2ae-4910-9a27-c9b67c581886.png" alt="image"></li> <li>Login with user</li> <li>Create cloud credential</li> <li>Provision an RKE2 cluster</li> <li>Check the namespace dropdown list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>Now the standard user with cluster member rights won&rsquo;t display <code>harvester-public</code> while user node driver to provision the RKE2 cluster.</p> Project owner role on customized project open Harvester cluster https://harvester.github.io/tests/manual/_incoming/2394-2395-project-owner-customized-project-open-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2394-2395-project-owner-customized-project-open-harvester/ - Related issues: #2394 [BUG] Standard rancher user with project owner role of customized project to access Harvester get &ldquo;404 Not Found&rdquo; error Related issues: #2395 [backport v1.0] [BUG] Standard rancher user with project owner role of customized project to access Harvester get &ldquo;404 Not Found&rdquo; error Category: Rancher integration Verification Steps Import Harvester from Rancher Access Harvester on virtualization management page Create a project test and namespace test under it Go to user authentication page Create a stand rancher user test 
Access Harvester in Rancher Set project owner role of test project to test user Login Rancher with test user Access the virtualization management page Expected Results Now the standard user with project owner role can access harvester in virtualization management page correctly + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2394">#2394</a> [BUG] Standard rancher user with project owner role of customized project to access Harvester get &ldquo;404 Not Found&rdquo; error</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2395">#2395</a> [backport v1.0] [BUG] Standard rancher user with project owner role of customized project to access Harvester get &ldquo;404 Not Found&rdquo; error</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import Harvester from Rancher</li> <li>Access Harvester on virtualization management page</li> <li>Create a project test and namespace test under it</li> <li>Go to user authentication page</li> <li>Create a stand rancher user test</li> <li>Access Harvester in Rancher</li> <li>Set project owner role of test project to test user</li> <li>Login Rancher with test user</li> <li>Access the virtualization management page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Now the standard user with project owner role can access harvester in virtualization management page correctly <img src="https://user-images.githubusercontent.com/29251855/174706597-f98ecc41-b479-4e5b-b163-02f43c1c6138.png" alt="image"></li> </ul> Project owner should not see additional alert https://harvester.github.io/tests/manual/_incoming/2288-2350-project-owner-should-not-see-alert-copy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2288-2350-project-owner-should-not-see-alert-copy/ - Related issues: #2288 [BUG] The project-owner user will see an additional alert Related issues: #2350 [Backport v1.0] The project-owner user will see an additional alert Category: Rancher integration Verification Steps Importing a harvester cluster in a rancher cluster enter the imported harvester cluster from the Virtualization Management page create a new Project (test), Create a test namespace in the test project. go to Network page, add vlan 1 create a vm, choose test namespace, choose vlan network, click save create a new user (test), choose Standard User go to the project page, edit test Project, set test user to Project Owner。 login again with test user go to the vm page Expected Results Use rancher standard user test with project owner permission to access Harvester. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2288">#2288</a> [BUG] The project-owner user will see an additional alert</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2350">#2350</a> [Backport v1.0] The project-owner user will see an additional alert</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Importing a harvester cluster in a rancher cluster</li> <li>enter the imported harvester cluster from the <code>Virtualization Management</code> page</li> <li>create a new Project (test), Create a test namespace in the test project.</li> <li>go to <code>Network</code> page, add <code>vlan 1</code></li> <li>create a vm, choose <code>test namespace</code>, choose <code>vlan network</code>, click save</li> <li>create a new user (test), choose <code>Standard User</code></li> <li>go to the <code>project page</code>, edit <code>test</code> Project, set <code>test</code> user to Project Owner。 <!-- raw HTML omitted --></li> <li>login again with <code>test user</code></li> <li>go to the vm page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Use rancher standard user <code>test</code> with project owner permission to access Harvester. Now there is no error alert on the created VM with vlan1 network <img src="https://user-images.githubusercontent.com/29251855/174733151-c8bcffdd-50e0-404e-a5b6-a9ff2f1a7387.png" alt="image"></li> </ul> Promote remaining host when delete one https://harvester.github.io/tests/manual/_incoming/2191-promote-remaining-host-when-delete-one/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2191-promote-remaining-host-when-delete-one/ - Related issues: #2191 [BUG] Promote fail, cluster stays in Provisioning phase Category: Host Verification Steps Create a 4-node Harvester cluster. Wait for three nodes to become control plane nodes (role is control-plane,etcd,master). Delete one of the control plane nodes. The remaining worker node should be promoted to a control plane node (role is control-plane,etcd,master). Expected Results Four nodes Harvester cluster status, before delete one of the control-plane node n1-221021:/etc # kubectl get nodes NAME STATUS ROLES AGE VERSION n1-221021 Ready control-plane,etcd,master 17h v1. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2191">#2191</a> [BUG] Promote fail, cluster stays in Provisioning phase</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a 4-node Harvester cluster.</li> <li>Wait for three nodes to become control plane nodes (role is control-plane,etcd,master).</li> <li>Delete one of the control plane nodes.</li> <li>The remaining worker node should be promoted to a control plane node (role is control-plane,etcd,master).</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>Four nodes Harvester cluster status, before delete one of the control-plane node</p> Provision RKE2 cluster with resource quota configured https://harvester.github.io/tests/manual/harvester-rancher/provision-rke2-cluster-with-resource-quota-configured/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/provision-rke2-cluster-with-resource-quota-configured/ - Related issues: #1455 Node driver provisioning fails when resource quota configured in project Related issues: #1449 Incorrect naming of project resource configuration Category: Rancher Integration Environment setup Install the latest rancher from docker command $ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.3 Test Scenarios Scenario 1: Project with resource quota: CPU Limit / CPU Reservation: 6000 / 6144 Memory Limit / Memory Reservation: 6000 / 6144 Scenario 2: + <ul> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/1455">#1455</a> Node driver provisioning fails when resource quota configured in project</p> </li> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/1449">#1449</a> Incorrect naming of project resource configuration</p> </li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher Integration</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ol> <li>Install the latest rancher from docker command</li> </ol> <pre tabindex="0"><code>$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.3 </code></pre><h2 id="test-scenarios">Test Scenarios</h2> <ul> <li> <p>Scenario 1:</p> PXE instll without iso_url field https://harvester.github.io/tests/manual/deployment/1439-pxe-install-without-iso-url-field/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/1439-pxe-install-without-iso-url-field/ - Related issues: #1439 PXE boot installation doesn&rsquo;t give an error if iso_url field is missing Environment setup This is easiest to test with the vagrant setup at https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester edit https://github.com/harvester/ipxe-examples/blob/main/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2#L27 to be blank Verification Steps Run the vagrant ./setup.sh from the vagrant repo Expected Results You should get an error in the console for the VM when installing + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1439">#1439</a> PXE boot installation doesn&rsquo;t give an error if iso_url field is missing</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>This is easiest to test with the vagrant setup at <a 
href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester">https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester</a></p> <ol> <li>edit <a href="https://github.com/harvester/ipxe-examples/blob/main/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2#L27">https://github.com/harvester/ipxe-examples/blob/main/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2#L27</a> to be blank</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Run the vagrant <code>./setup.sh</code> from the vagrant repo</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error in the console for the VM when installing</li> </ol> PXE instll without iso_url field https://harvester.github.io/tests/manual/hosts/1439-pxe-install-without-iso-url-field/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1439-pxe-install-without-iso-url-field/ - Related issues: #1439 PXE boot installation doesn&rsquo;t give an error if iso_url field is missing Environment setup This is easiest to test with the vagrant setup at https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester edit https://github.com/harvester/ipxe-examples/blob/main/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2#L27 to be blank Verification Steps Run the vagrant ./setup.sh from the vagrant repo Expected Results You should get an error in the console for the VM when installing + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1439">#1439</a> PXE boot installation doesn&rsquo;t give an error if iso_url field is missing</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>This is easiest to test with the vagrant setup at <a href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester">https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester</a></p> <ol> <li>edit <a href="https://github.com/harvester/ipxe-examples/blob/main/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2#L27">https://github.com/harvester/ipxe-examples/blob/main/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2#L27</a> to be blank</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Run the vagrant <code>./setup.sh</code> from the vagrant repo</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error in the console for the VM when installing</li> </ol> Rancher Resource quota management https://harvester.github.io/tests/manual/harvester-rancher/resource_quota/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/resource_quota/ - Ref: https://github.com/harvester/harvester/issues/1450 Verify Items Project&rsquo;s Resource quotas can be updated correctly Namespace Default Limit should be assigned as the Project configured Namespace moving between projects should work correctly Case: Create Namespace with Resource quotas Install Harvester with any nodes Install Rancher Login to Rancher, import Harvester from Virtualization Management Access Harvester dashboard via Virtualization Management Navigate to Project/Namespaces, Create Project A with Resource quotas Create Namespace N1 based on Project A The Default value of Resource Quotas should be the same as Namespace Default Limit assigned in Project A Modifying resource limit should work correctly (when 
increasing/decreasing, the value should increased/decreased) After N1 Created, Click Edit Config on N1 resource limit should be the same as we assigned Increase/decrease resource limit then Save Click Edit Config on N1, resource limit should be the same as we assigned Click Edit Config on N1, then increase resource limit exceeds Project A&rsquo;s Limit Click Save Button, error message should shown. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1450">https://github.com/harvester/harvester/issues/1450</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Project&rsquo;s Resource quotas can be updated correctly</li> <li><strong>Namespace Default Limit</strong> should be assigned as the Project configured</li> <li>Namespace moving between projects should work correctly</li> </ul> <h2 id="case-create-namespace-with-resource-quotas">Case: Create Namespace with Resource quotas</h2> <ol> <li>Install Harvester with any nodes</li> <li>Install Rancher</li> <li>Login to Rancher, import Harvester from <em>Virtualization Management</em></li> <li>Access Harvester dashboard via <em>Virtualization Management</em></li> <li>Navigate to <em>Project/Namespaces</em>, Create Project <code>A</code> with Resource quotas</li> <li>Create Namespace <code>N1</code> based on Project <code>A</code></li> <li>The Default value of Resource Quotas should be the same as <strong>Namespace Default Limit</strong> assigned in Project <code>A</code></li> <li>Modifying <strong>resource limit</strong> should work correctly (when increasing/decreasing, the value should increased/decreased)</li> <li>After <code>N1</code> Created, Click <strong>Edit Config</strong> on <code>N1</code></li> <li><strong>resource limit</strong> should be the same as we assigned</li> <li>Increase/decrease <strong>resource limit</strong> then Save</li> <li>Click <strong>Edit Config</strong> on <code>N1</code>, <strong>resource limit</strong> should be the same as we assigned</li> <li>Click <strong>Edit Config</strong> on <code>N1</code>, then increase <strong>resource limit</strong> exceeds Project <code>A</code>&rsquo;s Limit</li> <li>Click <strong>Save</strong> Button, error message should shown.</li> <li>Click <strong>Edit Config</strong> on <code>N1</code>, then change the <strong>Project</strong> to <code>Default</code></li> <li>The Namespace <code>N1</code> should be moved to Project <code>Default</code></li> </ol> rancher-monitoring status when hosting NODE down https://harvester.github.io/tests/manual/_incoming/2243-rancher-monitoring-status-when-hosting-node-down/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2243-rancher-monitoring-status-when-hosting-node-down/ - Related issues: #2243 [BUG] rancher-monitoring is unusable when hosting NODE is (accidently) down Category: Monitoring Verification Steps Install a two nodes harvester cluster Check the Initial state of the 2 nodes Harvester cluster harv-node1-0719:~ # kubectl get nodes NAME STATUS ROLES AGE VERSION harv-node1-0719 Ready control-plane,etcd,master 36m v1.21.11+rke2r1 harv-node2-0719 Ready &lt;none&gt; harv-node1-0719:~ # kubectl get pods -A | grep monitoring cattle-monitoring-system prometheus-rancher-monitoring-prometheus-0 3/3 Running 0 33m cattle-monitoring-system rancher-monitoring-grafana-d9c56d79b-ckbjc 3/3 Running 0 33m harv-node1-0719:~ # kubectl get pods prometheus-rancher-monitoring-prometheus-0 -n cattle-monitoring-system -o yaml | grep nodeName nodeName: harv-node1-0719 Power off both nodes + <ul> <li>Related issues: 
<a href="https://github.com/harvester/harvester/issues/2243">#2243</a> [BUG] rancher-monitoring is unusable when hosting NODE is (accidently) down</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Monitoring</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install a two nodes harvester cluster</li> <li>Check the Initial state of the 2 nodes Harvester cluster</li> </ol> <pre tabindex="0"><code>harv-node1-0719:~ # kubectl get nodes NAME STATUS ROLES AGE VERSION harv-node1-0719 Ready control-plane,etcd,master 36m v1.21.11+rke2r1 harv-node2-0719 Ready &lt;none&gt; harv-node1-0719:~ # kubectl get pods -A | grep monitoring cattle-monitoring-system prometheus-rancher-monitoring-prometheus-0 3/3 Running 0 33m cattle-monitoring-system rancher-monitoring-grafana-d9c56d79b-ckbjc 3/3 Running 0 33m harv-node1-0719:~ # kubectl get pods prometheus-rancher-monitoring-prometheus-0 -n cattle-monitoring-system -o yaml | grep nodeName nodeName: harv-node1-0719 </code></pr RBAC Cluster Owner https://harvester.github.io/tests/manual/_incoming/2626-local-cluster-0owner/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2626-local-cluster-0owner/ - Related issues: #2626 [BUG] Access Harvester project/namespace page hangs with no response timeout with local owner role from Rancher Category: Authentication Verification Steps Import Harvester from Rancher Create a standard user local in Rancher User &amp; Authentication Open Cluster Management page Edit cluster config Expand Member Roles Add local user with Cluster Owner role Logout Admin Login with local user Access Harvester from virtualization management Click the Project/Namespace page Expected Results Local owner role user can access and display Harvester project/namespace place correctly without hanging to timeout + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2626">#2626</a> [BUG] Access Harvester project/namespace page hangs with no response timeout with local owner role from Rancher</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Authentication</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import Harvester from Rancher</li> <li>Create a standard user local in Rancher User &amp; Authentication</li> <li>Open Cluster Management page</li> <li>Edit cluster config <img src="https://user-images.githubusercontent.com/29251855/182781682-5cdd3c6a-517b-4f61-980d-3ee3cab86745.png" alt="image"></li> <li>Expand Member Roles</li> <li>Add local user with Cluster Owner role <img src="https://user-images.githubusercontent.com/29251855/182781823-b71ba504-6488-4581-b50d-17c333496b8c.png" alt="image"></li> <li>Logout Admin</li> <li>Login with local user</li> <li>Access Harvester from virtualization management</li> <li>Click the Project/Namespace page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Local owner role user can access and display Harvester project/namespace place correctly without hanging to timeout</li> </ol> RBAC Create VM with restricted admin user https://harvester.github.io/tests/manual/_incoming/2587-2116-create-vm-with-restricted-admin/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2587-2116-create-vm-with-restricted-admin/ - Related issues: #2587 [BUG] namespace on create VM is wrong when going through Rancher #2116 [BUG] You can see cattle-monitoring-system volumes as restricted admin in Harvester Category: Authentication Verification Steps Verification Steps Import Harvester 
into Rancher Create a restricted admin Navigate to Volumes page Verify you only see associated Volumes Log out of admin and log in to restricted admin Navigate to Harvester UI via virtualization management Open virtual machines tab Click create Verified that namespace was default. + <ul> <li>Related issues:</li> </ul> <ul> <li><a href="https://github.com/harvester/harvester/issues/2587">#2587</a> [BUG] namespace on create VM is wrong when going through Rancher</li> <li><a href="https://github.com/harvester/harvester/issues/2116">#2116</a> [BUG] You can see cattle-monitoring-system volumes as restricted admin in Harvester</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Authentication</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Verification Steps</li> <li>Import Harvester into Rancher</li> <li>Create a restricted admin</li> <li>Navigate to Volumes page</li> <li>Verify you only see associated Volumes</li> <li>Log out of admin and log in to restricted admin</li> <li>Navigate to Harvester UI via virtualization management</li> <li>Open virtual machines tab</li> <li>Click create</li> <li>Verified that namespace was default.</li> <li>Create VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create with no errors</li> </ol> Reboot a cluster and check VIP https://harvester.github.io/tests/manual/harvester-rancher/1669-reboot-cluster-check-vip/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/1669-reboot-cluster-check-vip/ - Related issues: #1669 Unable to access harvester VIP nor node IP after reboot or fully power cycle node machines (Intermittent) Verification Steps Enable VLAN with NIC harvester-mgmt Create VLAN 1 Disable VLAN Enable VLAN again shutdown node 3, 2, 1 server machine Wait for 15 minutes Power on node 1 server machine, wait for 20 seconds Power on node 2 server machine, wait for 20 seconds Power on node 3 server machine Check if you can access VIP and each node IP Expected Results VIP should load the page and show on every node in the terminal + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1669">#1669</a> Unable to access harvester VIP nor node IP after reboot or fully power cycle node machines (Intermittent)</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Enable VLAN with NIC harvester-mgmt</li> <li>Create VLAN 1</li> <li>Disable VLAN</li> <li>Enable VLAN again</li> <li>shutdown node 3, 2, 1 server machine</li> <li>Wait for 15 minutes</li> <li>Power on node 1 server machine, wait for 20 seconds</li> <li>Power on node 2 server machine, wait for 20 seconds</li> <li>Power on node 3 server machine</li> <li>Check if you can access VIP and each node IP</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VIP should load the page and show on every node in the terminal</li> </ol> Reboot a cluster and check VIP https://harvester.github.io/tests/manual/hosts/1669-reboot-cluster-check-vip/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1669-reboot-cluster-check-vip/ - Related issues: #1669 Unable to access harvester VIP nor node IP after reboot or fully power cycle node machines (Intermittent) Verification Steps Enable VLAN with NIC harvester-mgmt Create VLAN 1 Disable VLAN Enable VLAN again shutdown node 3, 2, 1 server machine Wait for 15 minutes Power on node 1 server machine, wait for 20 seconds Power on node 2 server machine, wait for 20 seconds Power on node 3 server machine Check 
if you can access VIP and each node IP Expected Results VIP should load the page and show on every node in the terminal + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1669">#1669</a> Unable to access harvester VIP nor node IP after reboot or fully power cycle node machines (Intermittent)</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Enable VLAN with NIC harvester-mgmt</li> <li>Create VLAN 1</li> <li>Disable VLAN</li> <li>Enable VLAN again</li> <li>shutdown node 3, 2, 1 server machine</li> <li>Wait for 15 minutes</li> <li>Power on node 1 server machine, wait for 20 seconds</li> <li>Power on node 2 server machine, wait for 20 seconds</li> <li>Power on node 3 server machine</li> <li>Check if you can access VIP and each node IP</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VIP should load the page and show on every node in the terminal</li> </ol> Reboot host that is in maintenance mode (e2e_be) https://harvester.github.io/tests/manual/hosts/maintenance-mode-reboot-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-reboot-host/ - For Host that is in maintenance mode and turned on Reboot host Expected Results Host should reboot Maintenance mode label in hosts list should go from yellow to red to yellow Known Bugs https://github.com/harvester/harvester/issues/1272 + <ol> <li>For Host that is in maintenance mode and turned on</li> <li>Reboot host</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host should reboot</li> <li>Maintenance mode label in hosts list should go from yellow to red to yellow</li> </ol> <h3 id="known-bugs">Known Bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/1272">https://github.com/harvester/harvester/issues/1272</a></p> Reboot host trigger VM migration https://harvester.github.io/tests/manual/hosts/vm_migrated_after_host_reboot/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/vm_migrated_after_host_reboot/ - Ref: N/A, legacy test case Criteria VM should created and started successfully Node should be unavailable while rebooting VM should be migrated to ohter node Verify Steps: Install Harvester with at least 2 nodes Create a image for VM creation Create a VM vm1 and start it vm1 should started successfully Reboot the node hosting vm1 the node should becomes unavailable on dashboard vm1 should be automatically migrated to another node + <p>Ref: N/A, legacy test case</p> <h3 id="criteria">Criteria</h3> <ul> <li><input checked="" disabled="" type="checkbox"> VM should created and started successfully</li> <li><input checked="" disabled="" type="checkbox"> Node should be unavailable while rebooting</li> <li><input checked="" disabled="" type="checkbox"> VM should be migrated to ohter node</li> </ul> <h2 id="verify-steps">Verify Steps:</h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Create a image for VM creation</li> <li>Create a VM <code>vm1</code> and start it</li> <li><code>vm1</code> should started successfully</li> <li>Reboot the node hosting <code>vm1</code></li> <li>the node should becomes unavailable on dashboard</li> <li><code>vm1</code> should be automatically migrated to another node</li> </ol> Reboot node https://harvester.github.io/tests/manual/hosts/negative-reboot-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/negative-reboot-node/ - Create a vm on the cluster. Reboot the node where the vm exists. 
Reboot the node where there is no vm Expected Results On rebooting the node, once the node is back up and Harvester is started, the host should become available on the cluster. + <ol> <li>Create a vm on the cluster.</li> <li>Reboot the node where the vm exists.</li> <li>Reboot the node where there is no vm</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>On rebooting the node, once the node is back up and Harvester is started, the host should become available on the cluster.</li> </ol> Reboot the management node/added node. https://harvester.github.io/tests/manual/deployment/negative-reboot-management-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/negative-reboot-management-node/ - Create a three nodes cluster for Harvester. Reboot the management node/added node. Expected Results Once the node is up after reboot, the node should become available in the cluster. + <ol> <li>Create a three nodes cluster for Harvester.</li> <li>Reboot the management node/added node.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Once the node is up after reboot, the node should become available in the cluster.</li> </ol> Recover cordon and maintenace node after harvester node machine reboot https://harvester.github.io/tests/manual/hosts/recover-cordon-or-maintenace-node-after-harvester-node-machine-reboot/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/recover-cordon-or-maintenace-node-after-harvester-node-machine-reboot/ - Related issues: #1493 When hosts are stuck in maintenance mode and the cluster is unstable you can&rsquo;t access the UI Category: Host Verification Steps Create 3 virtual machine on 3 harvester nodes Cordon 1st and 2nd node, Enable maintenance mode on 1st and 2nd node We can&rsquo;t cordon and enable maintenance node on the remaining node Reboot 1st and 2nd node bare machine Wait for harvester machine back to service Login dashboard Disable maintenance mode on 1st and 2nd node Expected Results Cordon node and enter maintenance mode, after machine reboot, user can login harvester dashboard. 
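The host reboot, cordon, and migration flows in the cases above can be cross-checked from a shell with cluster access; a minimal sketch, assuming a kubeconfig for the Harvester cluster and a hypothetical node name harvester-node-1 (not taken from the steps): <pre tabindex="0"><code># cordon the node (the UI action drives the same scheduling state), then confirm it shows SchedulingDisabled
kubectl cordon harvester-node-1
kubectl get nodes
# after the bare-metal reboot, wait for the node to report Ready again, then uncordon it
kubectl wait --for=condition=Ready node/harvester-node-1 --timeout=15m
kubectl uncordon harvester-node-1
# for the migration cases, the NODENAME column shows which host each VM instance is on
kubectl get vmi -A
</code></pre>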
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1493">#1493</a> When hosts are stuck in maintenance mode and the cluster is unstable you can&rsquo;t access the UI</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create 3 virtual machine on 3 harvester nodes</li> <li>Cordon 1st and 2nd node, <img src="https://user-images.githubusercontent.com/29251855/141106858-cdfb35f3-50af-48d0-b776-1f1cc5dfcedc.png" alt="image"></li> <li>Enable maintenance mode on 1st and 2nd node <img src="https://user-images.githubusercontent.com/29251855/141106968-e4d7a6be-6c60-4771-aabd-8df0ccafe252.png" alt="image"></li> <li>We can&rsquo;t cordon and enable maintenance node on the remaining node <img src="https://user-images.githubusercontent.com/29251855/141107044-774166b8-117e-4635-b8a2-eeedb65e48fc.png" alt="image"></li> <li>Reboot 1st and 2nd node bare machine</li> <li>Wait for harvester machine back to service</li> <li>Login dashboard</li> <li>Disable maintenance mode on 1st and 2nd node</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Cordon node and enter maintenance mode, after machine reboot, user can login harvester dashboard.</li> <li>Node remain it&rsquo;s original status</li> <li>Can disable/uncordon node, it can back to original status <img src="https://user-images.githubusercontent.com/29251855/141111698-64d9d648-9018-4c14-8828-539f6e44361e.png" alt="image"></li> </ol> Reinstall agent node https://harvester.github.io/tests/manual/_incoming/2665-2892-reinstall-agent-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2665-2892-reinstall-agent-node/ - Related issues: #2665 [BUG] reinstall 1st node Related issues: #2892 [BUG] rancher-system-agent keeps showing error on a new node in an upgraded cluster Category: Host Verification Steps Test Plan 1: Reinstall management node and agent node in a upgraded cluster Create a 4-node v1.0.3 cluster. Upgrade the master branch: Check the spec content in provisioning.cattle.io/v1/clusters -&gt; fleet-local Check the iface content in helm.cattle.io/v1/helmchartconfigs -&gt; rke2-canal spec: │ │ valuesContent: |- │ │ flannel: │ │ iface: &#34;&#34; Remove the agent node and 1 management node. 
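The spec checks called out in Test Plan 1 above can also be done with kubectl; a minimal sketch, assuming the usual namespaces (fleet-local for the provisioning cluster object, kube-system for the rke2-canal HelmChartConfig): <pre tabindex="0"><code># inspect the provisioning cluster spec referenced above
kubectl get clusters.provisioning.cattle.io -n fleet-local -o yaml
# inspect the flannel iface value in the rke2-canal HelmChartConfig
kubectl get helmchartconfigs.helm.cattle.io rke2-canal -n kube-system -o yaml
</code></pre>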
+ <ul> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/2665">#2665</a> [BUG] reinstall 1st node</p> </li> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/2892">#2892</a> [BUG] rancher-system-agent keeps showing error on a new node in an upgraded cluster</p> </li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <h3 id="test-plan-1-reinstall-management-node-and-agent-node-in-a-upgraded-cluster">Test Plan 1: Reinstall management node and agent node in a upgraded cluster</h3> <ol> <li> <p>Create a 4-node v1.0.3 cluster.</p> </li> <li> <p>Upgrade the master branch:</p> </li> <li> <p>Check the spec content in <code>provisioning.cattle.io/v1/clusters -&gt; fleet-local</code> <img src="https://user-images.githubusercontent.com/29251855/196139161-7b6e6e84-692d-4f4f-a978-62fc50f64f06.png" alt="image"></p> Rejoin node machine after Harvester upgrade https://harvester.github.io/tests/manual/upgrade/rejoin-node-after-upgrade/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/upgrade/rejoin-node-after-upgrade/ - Related issues: #2655 [BUG] reinstall 1st node Category: Upgrade Harvester Environment requirement Network environment has available VLAN id setup on DHCP server DHCP server has setup the IP range can allocate to above VLAN id Harvester node can route to DHCP server through VLAN id to retrieve IP address Network has at least two NICs Suggest not to use SMR type HDD disk Verification Steps Create a 3 nodes v1. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2655">#2655</a> [BUG] reinstall 1st node</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade Harvester</li> </ul> <h2 id="environment-requirement">Environment requirement</h2> <ol> <li>Network environment has available VLAN id setup on DHCP server</li> <li>DHCP server has setup the IP range can allocate to above VLAN id</li> <li>Harvester node can route to DHCP server through VLAN id to retrieve IP address</li> <li>Network has at least two NICs</li> <li>Suggest not to use SMR type HDD disk</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Create a 3 nodes v1.0.3 Harvester cluster.</p> Remove a management node from a 3 nodes cluster and add it back to the cluster by reinstalling it https://harvester.github.io/tests/manual/hosts/remove-management-node-then-reinstall/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/remove-management-node-then-reinstall/ - From a HA cluster with 3 nodes Delete one of the nodes after the node promotion(all 3 nodes are management nodes) Reinstall the removed node with the same node name and IP The rejoined node will be promoted to master automatically Expected Results The removed node should be able to rejoin the cluster without issues Comments Purpose is to cover this scenario: https://github.com/harvester/harvester/issues/1040 Check the job promotion with the command kubectl get jobs -n harvester-system If a node is stuck in the removing status, you likely face to this issue, execute this command as workaround: kubectl get node -o name &lt;nodename&gt; | xargs -i kubectl patch {} -p '{&quot;metadata&quot;:{&quot;finalizers&quot;:[]}}' --type=merge + <ol> <li>From a HA cluster with 3 nodes</li> <li>Delete one of the nodes after the node promotion(all 3 nodes are management nodes)</li> <li>Reinstall the removed node with the same node name and IP</li> <li>The rejoined 
node will be promoted to master automatically</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The removed node should be able to rejoin the cluster without issues</li> </ol> <h3 id="comments">Comments</h3> <ul> <li>Purpose is to cover this scenario: <a href="https://github.com/harvester/harvester/issues/1040">https://github.com/harvester/harvester/issues/1040</a></li> <li>Check the job promotion with the command kubectl get jobs -n harvester-system</li> <li>If a node is stuck in the removing status, you likely face to this issue, execute this command as workaround: <code>kubectl get node -o name &lt;nodename&gt; | xargs -i kubectl patch {} -p '{&quot;metadata&quot;:{&quot;finalizers&quot;:[]}}' --type=merge</code></li> </ul> Remove a node from the existing cluster https://harvester.github.io/tests/manual/deployment/remove-node-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/remove-node-cluster/ - Remove node from the Harvester cluster using the Harvester UI Expected Results The components of Harvester should get cleaned up from the node. + <ol> <li>Remove node from the Harvester cluster using the Harvester UI</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>The components of Harvester should get cleaned up from the node.</p> Remove Pod Scheduling from harvester rke2 and rke1 https://harvester.github.io/tests/manual/_incoming/2642-remove-pod-scheduling/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2642-remove-pod-scheduling/ - Related issues: #2642 [BUG] Remove Pod Scheduling from harvester rke2 and rke1 Category: Rancher Test Information Test Environment: 1 node harvester on local kvm machine Harvester version: v1.0-44fb5f1a-head (08/10) Rancher version: v2.6.7-rc7 Environment Setup Prepare Harvester master node Prepare Rancher v2.6.7-rc7 Import Harvester to Rancher Set ui-offline-preferred: Remote Go to Harvester Support page Download Kubeconfig Copy the content of Kubeconfig Verification Steps RKE2 Verification Steps Open Harvester Host page then edit host config Add the following key value in the labels page: topology. 
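The topology zone/region labels this case applies through the host edit page can equally be set with kubectl; a sketch, assuming a hypothetical node name harvester-node-1 and the zone_bp / region_bp values listed in the steps below: <pre tabindex="0"><code>kubectl label node harvester-node-1 \
  topology.kubernetes.io/zone=zone_bp \
  topology.kubernetes.io/region=region_bp
# confirm the labels before adding the matching node selector rule in the RKE2 provisioning page
kubectl get node harvester-node-1 --show-labels
</code></pre>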
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2642">#2642</a> [BUG] Remove Pod Scheduling from harvester rke2 and rke1</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher</li> </ul> <h2 id="test-information">Test Information</h2> <p>Test Environment: 1 node harvester on local kvm machine Harvester version: v1.0-44fb5f1a-head (08/10) Rancher version: v2.6.7-rc7</p> <h2 id="environment-setup">Environment Setup</h2> <ol> <li>Prepare Harvester master node</li> <li>Prepare Rancher v2.6.7-rc7</li> <li>Import Harvester to Rancher</li> <li>Set ui-offline-preferred: Remote</li> <li>Go to Harvester Support page</li> <li>Download Kubeconfig</li> <li>Copy the content of Kubeconfig</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <h3 id="rke2-verification-steps">RKE2 Verification Steps</h3> <ol> <li>Open Harvester Host page then edit host config</li> <li>Add the following key value in the labels page: <ul> <li>topology.kubernetes.io/zone: zone_bp</li> <li>topology.kubernetes.io/region: region_bp <img src="https://user-images.githubusercontent.com/29251855/183802450-a790b9a2-3e2c-4559-8f84-b5a768b9c83d.png" alt="image"></li> </ul> </li> <li>Open the RKE2 provisioning page</li> <li>Expand the show advanced</li> <li>Click add Node selector in Node scheduling</li> <li>Use default Required priority</li> <li>Click Add Rule</li> <li>Provide the following key/value pairs</li> <li>topology.kubernetes.io/zone: zone_bp</li> <li>topology.kubernetes.io/region: region_bp</li> <li>Provide the following user data <pre tabindex="0"><code>password: 123456 chpasswd: { expire: False } ssh_pwauth: True </code></pr Remove unavailable node with VMs on it https://harvester.github.io/tests/manual/hosts/negative-remove-unavailable-node-with-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/negative-remove-unavailable-node-with-vm/ - Create VMs on a host. 
Turn off Host Remove Host from hosts list Expected Results VMs should migrate to new host Known Bugs https://github.com/harvester/harvester/issues/983 + <ol> <li>Create VMs on a host.</li> <li>Turn off Host</li> <li>Remove Host from hosts list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VMs should migrate to new host</li> </ol> <h3 id="known-bugs">Known Bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/983">https://github.com/harvester/harvester/issues/983</a></p> Restart Button Web VNC window https://harvester.github.io/tests/manual/_incoming/379-restart-button-web-vnc-window/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/379-restart-button-web-vnc-window/ - Related issues: #379 [Question] Restart Button Web VNC window Category: VM Verification Steps Create a new VM with Ubuntu desktop 20.04 Prepare two volume Complete the installation process Open a web browser on Ubuntu desktop Check the shortcut keys combination Expected Results The soft reboot keys can display and reboot correctly on Linux OS VM (Ubuntu desktop 20.04) + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/379">#379</a> [Question] Restart Button Web VNC window</li> </ul> <h2 id="category">Category:</h2> <ul> <li>VM</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a new VM with Ubuntu desktop 20.04</li> <li>Prepare two volume</li> <li>Complete the installation process</li> <li>Open a web browser on Ubuntu desktop</li> <li>Check the shortcut keys combination</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The soft reboot keys can display and reboot correctly on Linux OS VM (Ubuntu desktop 20.04) <img src="https://user-images.githubusercontent.com/29251855/177100026-e67d0101-0a5b-433c-b9ab-e2b4af1a8d0f.png" alt="image"></li> </ol> Restart/Stop VM with in progress Backup https://harvester.github.io/tests/manual/_incoming/1702-do-not-allow-restart-or-stop-vm-when-backup-is-in-progress/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1702-do-not-allow-restart-or-stop-vm-when-backup-is-in-progress/ - Related issues: #1702 Don&rsquo;t allow restart/stop vm when backup is in progress Verification Steps Create a VM. Create a VMBackup for it. Before VMBackup is done, stop/restart the VM. Verify VM can&rsquo;t be stopped/restarted. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1702">#1702</a> Don&rsquo;t allow restart/stop vm when backup is in progress</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a VM.</li> <li>Create a VMBackup for it.</li> <li>Before VMBackup is done, stop/restart the VM. 
Verify VM can&rsquo;t be stopped/restarted.</li> </ol> Restore backup create new vm (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-create-new-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-create-new-vm/ - Create a new file before restoring the backup and add some data Stop the VM where the backup was taken Navigate to backups list Click restore Backup Select appropriate option Select backup Click restore Validate that new file is no longer present on machine Expected Results Backup should restore VM should update to previous backup File should no longer be present + <ol> <li>Create a new file before restoring the backup and add some data</li> <li>Stop the VM where the backup was taken</li> <li>Navigate to backups list</li> <li>Click restore Backup</li> <li>Select appropriate option</li> <li>Select backup</li> <li>Click restore</li> <li>Validate that new file is no longer present on machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backup should restore</li> <li>VM should update to previous backup</li> <li>File should no longer be present</li> </ol> Restore backup create new vm in another namespace https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-create-new-vm-in-another-namespace/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-create-new-vm-in-another-namespace/ - Create a VM vm in namespace default. Create a file ~/test.txt with content test. Create a VMBackup default-vm-backup for it. Create a new namepsace new-ns. Create a VMRestore restore-default-vm-backup-to-new-ns in new-ns namespace based on the VMBackup default-vm-backup to create a new VM. Expected Results A new VM in new-ns namespace should be created. It should have the file ~/test.txt with content test. 
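A minimal sketch of seeding and later verifying the test file for this restore-to-another-namespace case, run inside the guest (paths are illustrative): <pre tabindex="0"><code># inside the source VM, before taking default-vm-backup
echo test | tee ~/test.txt
# inside the VM restored into new-ns, after restore-default-vm-backup-to-new-ns completes
cat ~/test.txt   # expected output: test
</code></pre>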
+ <ol> <li>Create a VM <code>vm</code> in namespace <code>default</code>.</li> <li>Create a file <code>~/test.txt</code> with content <code>test</code>.</li> <li>Create a VMBackup <code>default-vm-backup</code> for it.</li> <li>Create a new namepsace <code>new-ns</code>.</li> <li>Create a VMRestore <code>restore-default-vm-backup-to-new-ns</code> in <code>new-ns</code> namespace based on the VMBackup <code>default-vm-backup</code> to create a new VM.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>A new VM in <code>new-ns</code> namespace should be created.</li> <li>It should have the file <code>~/test.txt</code> with content <code>test</code>.</li> </ol> Restore Backup for VM that was live migrated (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-for-vm-live-migrated/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-for-vm-live-migrated/ - Navigate to backups list Click restore Backup Select appropriate option Select backup Click restore Validate that new file is no longer present on machine Expected Results Backup should restore VM should update to previous backup File should no longer be present + <ol> <li>Navigate to backups list</li> <li>Click restore Backup</li> <li>Select appropriate option</li> <li>Select backup</li> <li>Click restore</li> <li>Validate that new file is no longer present on machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backup should restore</li> <li>VM should update to previous backup</li> <li>File should no longer be present</li> </ol> Restore backup replace existing VM with backup from same VM (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-replace-existing/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-replace-existing/ - Create a new file before restoring the backup and add some data Stop the VM Navigate to backups list Click restore Backup Select appropriate option Select backup Click restore Validate that new file is no longer present on machine Expected Results Backup should restore VM should update to previous backup File should no longer be present + <ol> <li>Create a new file before restoring the backup and add some data</li> <li>Stop the VM</li> <li>Navigate to backups list</li> <li>Click restore Backup</li> <li>Select appropriate option</li> <li>Select backup</li> <li>Click restore</li> <li>Validate that new file is no longer present on machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backup should restore</li> <li>VM should update to previous backup</li> <li>File should no longer be present</li> </ol> Restore First backup in chained backup https://harvester.github.io/tests/manual/backup-and-restore/restore-first-backup-chained-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/restore-first-backup-chained-backup/ - Create a new VM Create a file named 1 and add some data using command dd if=/dev/urandom of=file1.txt count=100 bs=1M Compute md5sum : md5sum-1 Create a backup Overwrite file 1 Create file 2 Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3 Create Backup Overwrite the file 2 Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5 Create backup Validate that files didn&rsquo;t change Restore to backup 1 Validate that md5sum -c file1. 
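A minimal sketch of the checksum bookkeeping these chained-backup cases rely on; file names follow the md5sum -c checks shown in the cases, and which .md5 files you verify depends on the backup being restored: <pre tabindex="0"><code># before backup 1
dd if=/dev/urandom of=file1.txt count=100 bs=1M
md5sum file1.txt | tee file1.md5            # md5sum-1
# before backup 2: overwrite file 1, add file 2
dd if=/dev/urandom of=file1.txt count=100 bs=1M
dd if=/dev/urandom of=file2.txt count=100 bs=1M
md5sum file1.txt | tee file1-2.md5          # md5sum-2
md5sum file2.txt | tee file2.md5            # md5sum-3
# after restoring a backup, check which generation of each file came back
md5sum -c file1.md5 file2.md5
</code></pre>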
+ <ol> <li>Create a new VM</li> <li>Create a file named 1 and add some data using command <code>dd if=/dev/urandom of=file1.txt count=100 bs=1M</code></li> <li>Compute md5sum : md5sum-1</li> <li>Create a backup</li> <li>Overwrite file 1</li> <li>Create file 2</li> <li>Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3</li> <li>Create Backup</li> <li>Overwrite the file 2</li> <li>Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5</li> <li>Create backup</li> <li>Validate that files didn&rsquo;t change</li> <li>Restore to backup 1</li> <li>Validate that <ul> <li><code>md5sum -c file1.md5 file2.md5 file3.md5</code></li> <li>file 1 is in original format - md5sum-1</li> <li>file 2 doesn&rsquo;t exist</li> <li>file 3 doesn&rsquo;t exist</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Vm should create</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>files should be as expected</li> </ol> Restore last backup in chained backup https://harvester.github.io/tests/manual/backup-and-restore/restore-last-backup-chained-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/restore-last-backup-chained-backup/ - Create a new VM Create a file named 1 and add some data using command dd if=/dev/urandom of=file1.txt count=100 bs=1M Compute md5sum : md5sum-1 Create a backup Overwrite file 1 Create file 2 Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3 Create Backup Overwrite the file 2 Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5 Create backup Validate that files didn&rsquo;t change Restore to backup 3 Validate that md5sum -c file1-2. 
+ <ol> <li>Create a new VM</li> <li>Create a file named 1 and add some data using command <code>dd if=/dev/urandom of=file1.txt count=100 bs=1M</code></li> <li>Compute md5sum : md5sum-1</li> <li>Create a backup</li> <li>Overwrite file 1</li> <li>Create file 2</li> <li>Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3</li> <li>Create Backup</li> <li>Overwrite the file 2</li> <li>Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5</li> <li>Create backup</li> <li>Validate that files didn&rsquo;t change</li> <li>Restore to backup 3</li> <li>Validate that <ul> <li><code>md5sum -c file1-2.md5 file2-2.md5 file3.md5 </code></li> <li>file 1 is in second format</li> <li>file 2 is in second format</li> <li>file 3 matches</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Vm should create</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>files should be as expected</li> </ol> Restore middle backup in chained backup https://harvester.github.io/tests/manual/backup-and-restore/restore-middle-backup-chained-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/restore-middle-backup-chained-backup/ - Create a new VM Create a file named 1 and add some data using command dd if=/dev/urandom of=file1.txt count=100 bs=1M Compute md5sum : md5sum-1 Create a backup Overwrite file 1 Create file 2 Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3 Create Backup Overwrite the file 2 Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5 Create backup Validate that files didn&rsquo;t change Restore to backup 2 Validate that md5sum -c file1-2. 
+ <ol> <li>Create a new VM</li> <li>Create a file named 1 and add some data using command <code>dd if=/dev/urandom of=file1.txt count=100 bs=1M</code></li> <li>Compute md5sum : md5sum-1</li> <li>Create a backup</li> <li>Overwrite file 1</li> <li>Create file 2</li> <li>Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3</li> <li>Create Backup</li> <li>Overwrite the file 2</li> <li>Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5</li> <li>Create backup</li> <li>Validate that files didn&rsquo;t change</li> <li>Restore to backup 2</li> <li>Validate that <ul> <li><code>md5sum -c file1-2.md5 file2.md5 file3.md5</code></li> <li>file 1 is in second format</li> <li>file 2 is in original format</li> <li>file 3 doesn&rsquo;t exist</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Vm should create</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>files should be as expected</li> </ol> restored VM can not be cloned https://harvester.github.io/tests/manual/_incoming/2968_restored_vm_can_not_be_cloned/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2968_restored_vm_can_not_be_cloned/ - Ref: https://github.com/harvester/harvester/issues/2968 Test Information Environment: qemu/KVM 3 nodes Harvester Version: master-f96827b2-head ui-source Option: Auto Verify Steps: Follow Steps to reproduce in https://github.com/harvester/harvester/issues/2968#issue-1413026149 Additional regression test cases listed in https://github.com/harvester/tests/issues/568#issue-1414534000 + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2968">https://github.com/harvester/harvester/issues/2968</a></p> <h2 id="test-information">Test Information</h2> <ul> <li>Environment: <strong>qemu/KVM 3 nodes</strong></li> <li>Harvester Version: <strong>master-f96827b2-head</strong></li> <li><strong>ui-source</strong> Option: <strong>Auto</strong></li> </ul> <h3 id="verify-steps">Verify Steps:</h3> <ul> <li>Follow <strong>Steps to reproduce</strong> in <a href="https://github.com/harvester/harvester/issues/2968#issue-1413026149">https://github.com/harvester/harvester/issues/2968#issue-1413026149</a></li> <li>Additional regression test cases listed in <a href="https://github.com/harvester/tests/issues/568#issue-1414534000">https://github.com/harvester/tests/issues/568#issue-1414534000</a></li> </ul> Restored VM name does not support uppercases https://harvester.github.io/tests/manual/_incoming/4544_restored_vm_name_does_not_support_uppercases/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/4544_restored_vm_name_does_not_support_uppercases/ - Related issues: #4544 [BUG] Unable to restore backup into new VM when the name starts with upper case Category: Backup/Restore Verification Steps Setup backup-target in &lsquo;Advanced&rsquo; -&gt; &lsquo;Settings&rsquo; Create an image for VM creation Create a VM vm1 Take a VM backup vm1b Go to &lsquo;Backup &amp; Snapshot&rsquo;, restore vm1b to new VM Positive Cases Single lower Lowers Lowers contains &lsquo;.&rsquo; Lowers contains &lsquo;-&rsquo; Lowers contains &lsquo;.&rsquo; and &lsquo;-&rsquo; Negtive Cases Upper Upper infront of valid Upper append to valid Upper in the middle of valid Expected Results VM name should comply with following rules: + <ul> <li>Related issues: <a 
href="https://github.com/harvester/harvester/issues/4544">#4544</a> [BUG] Unable to restore backup into new VM when the name starts with upper case</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Backup/Restore</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Setup <code>backup-target</code> in &lsquo;Advanced&rsquo; -&gt; &lsquo;Settings&rsquo;</li> <li>Create an image for VM creation</li> <li>Create a VM <code>vm1</code></li> <li>Take a VM backup <code>vm1b</code></li> <li>Go to &lsquo;Backup &amp; Snapshot&rsquo;, restore <code>vm1b</code> to new VM</li> </ol> <h3 id="positive-cases">Positive Cases</h3> <ol> <li>Single lower</li> <li>Lowers</li> <li>Lowers contains &lsquo;.&rsquo;</li> <li>Lowers contains &lsquo;-&rsquo;</li> <li>Lowers contains &lsquo;.&rsquo; and &lsquo;-&rsquo; <img src="https://user-images.githubusercontent.com/2773781/270225975-17fea11e-a266-484d-a9d4-3e3af1624d45.png" alt="image"></li> </ol> <h3 id="negtive-cases">Negtive Cases</h3> <ol> <li> <p>Upper <img src="https://github.com/harvester/harvester/assets/2773781/b2411e02-e0c1-4fef-b996-997c8c827862" alt="image"></p> Restricted admin should not see cattle-monitoring-system volumes https://harvester.github.io/tests/manual/_incoming/2116-2351-restricted-admin-no-cattle-monitoring-system-volumes/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2116-2351-restricted-admin-no-cattle-monitoring-system-volumes/ - Related issues: #2116 [BUG] You can see cattle-monitoring-system volumes as restricted admin in Harvester Related issues: #2351 [Backport v1.0] You can see cattle-monitoring-system volumes as restricted admin in Harvester Category: Rancher integration Verification Steps Import Harvester to Rancher Create restricted admin in Rancher Log out of rancher Log in as restricted admin Navigate to Harvester ui in virtualization management Navigate to volumes page Expected Results Login Rancher with restricted admin and access Harvester volume page. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2116">#2116</a> [BUG] You can see cattle-monitoring-system volumes as restricted admin in Harvester</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2351">#2351</a> [Backport v1.0] You can see cattle-monitoring-system volumes as restricted admin in Harvester</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import Harvester to Rancher</li> <li>Create restricted admin in Rancher</li> <li>Log out of rancher</li> <li>Log in as restricted admin</li> <li>Navigate to Harvester ui in virtualization management</li> <li>Navigate to volumes page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Login Rancher with restricted admin and access Harvester volume page. Now it won&rsquo;t display the cattle-monitoring-system volumes. 
<img src="https://user-images.githubusercontent.com/29251855/174289481-00e74f70-c773-47af-847c-9ca6ecd86e1d.png" alt="image"></li> </ul> Run multiple instances of the console https://harvester.github.io/tests/manual/virtual-machines/run-multiple-instances-console/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/run-multiple-instances-console/ - Open up the console on two browsers to simulate multiple connections Login with both browsers create a new file on both instances Edit the file from the other instance and save Verify that you can see the changes from the other instance Expected Results You should be able to login from multiple browsers File should create File should update You should be able to see changes from all instances + <ol> <li>Open up the console on two browsers to simulate multiple connections</li> <li>Login with both browsers</li> <li>create a new file on both instances</li> <li>Edit the file from the other instance and save</li> <li>Verify that you can see the changes from the other instance</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should be able to login from multiple browsers</li> <li>File should create</li> <li>File should update</li> <li>You should be able to see changes from all instances</li> </ol> Set backup target S3 (e2e_fe) https://harvester.github.io/tests/manual/advanced/set-s3-backup-target/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/set-s3-backup-target/ - Log in as admin Navigate to advanced settings Edit config on backup-target Choose S3 Set valid S3 target Save Expected Results login should complete Settings should save You should not get an error message + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Edit config on backup-target</li> <li>Choose S3</li> <li>Set valid S3 target</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>login should complete</li> <li>Settings should save</li> <li>You should not get an error message</li> </ol> Set backup-target NFS (e2e_fe) https://harvester.github.io/tests/manual/advanced/set-nfs-backup-target/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/set-nfs-backup-target/ - Log in as admin Navigate to advanced settings Edit config on backup-target Choose NFS Set valid NFS target Save Expected Results login should complete Settings should save You should not get an error message + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Edit config on backup-target</li> <li>Choose NFS</li> <li>Set valid NFS target</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>login should complete</li> <li>Settings should save</li> <li>You should not get an error message</li> </ol> Set backup-target NFS invalid target https://harvester.github.io/tests/manual/advanced/negative-set-invalid-nfs-backup-target/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/negative-set-invalid-nfs-backup-target/ - Log in as admin Navigate to advanced settings Edit config on backup-target Choose NFS Set invalid NFS target Save Expected Results login should complete Settings should save You should get an error message + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Edit config on backup-target</li> <li>Choose NFS</li> <li>Set invalid NFS target</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>login should 
complete</li> <li>Settings should save</li> <li>You should get an error message</li> </ol> Set backup-target S3 invalid target https://harvester.github.io/tests/manual/advanced/negative-set-invalid-s3-backup-target/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/negative-set-invalid-s3-backup-target/ - Log in as admin Navigate to advanced settings Edit config on backup-target Choose S3 Set invalid S3 target Save Expected Results login should complete Settings should save You should get an error message + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Edit config on backup-target</li> <li>Choose S3</li> <li>Set invalid S3 target</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>login should complete</li> <li>Settings should save</li> <li>You should get an error message</li> </ol> Set maintenance mode on the last available node shouldn't be allowed https://harvester.github.io/tests/manual/hosts/set-maintenance-mode-on-the-last-available-node-shouldnt-be-allowed/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/set-maintenance-mode-on-the-last-available-node-shouldnt-be-allowed/ - Related issues: #1014 Trying to set maintenance mode on the last available node shouldn&rsquo;t be allowed Category: Host Verification Steps Create 3 vms located on node2 and node3 Open host page Set node 3 into maintenance mode Wait for virtual machine migrate to node 2 Set node 2 into maintenance mode wait for virtual machine migrate to node 1 Set node 2 into maintenance mode Expected Results Within 3 nodes and 3 virtual machines testing environment. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1014">#1014</a> Trying to set maintenance mode on the last available node shouldn&rsquo;t be allowed</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Create 3 vms located on node2 and node3 <img src="https://user-images.githubusercontent.com/29251855/140375836-50cfdb48-a37f-4d86-b931-04983e837cdc.png" alt="image"></p> </li> <li> <p>Open host page</p> </li> <li> <p>Set node 3 into maintenance mode</p> </li> <li> <p>Wait for virtual machine migrate to node 2</p> </li> <li> <p>Set node 2 into maintenance mode</p> Setup and test local Harvester upgrade responder https://harvester.github.io/tests/manual/_incoming/1849-setup-test-local-upgrade-responder/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1849-setup-test-local-upgrade-responder/ - Related issues: #1849 [Task] Improve Harvester upgrade responder Category: Upgrade Verification Steps Follow the steps in https://github.com/harvester/harvester/issues/1849#issuecomment-1180346017 Clone longhorn/upgrade-responder and checkout to v0.1.4. Edit response.json content in config folder { &#34;Versions&#34;: [ { &#34;Name&#34;: &#34;v1.0.2-master-head&#34;, &#34;ReleaseDate&#34;: &#34;2022-06-15T00:00:00Z&#34;, &#34;Tags&#34;: [ &#34;latest&#34;, &#34;test&#34;, &#34;dev&#34; ] } ] } Install InfluxDB Run longhorn/upgrade-responder with the command: go run main.go --debug start --upgrade-response-config config/response.json --influxdb-url http://localhost:8086 --geodb geodb/GeoLite2-City.mmdb --application-name harvester Check the local upgrade responder is running curl -X POST http://localhost:8314/v1/checkupgrade \ -d &#39;{ &#34;appVersion&#34;: &#34;v1. 
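A minimal sketch of the local pieces used in the upgrade-responder case above: running InfluxDB for the responder to write to, and issuing a check request once it listens on port 8314. The InfluxDB image tag, the appVersion value, and any request fields beyond appVersion are assumptions, not taken from the steps: <pre tabindex="0"><code># stand up a local InfluxDB for the responder (image tag assumed)
docker run -d --name influxdb -p 8086:8086 influxdb:1.8
# ask the responder which versions a cluster can upgrade to (appVersion value is illustrative)
curl -X POST http://localhost:8314/v1/checkupgrade \
  -d '{ &#34;appVersion&#34;: &#34;v1.0.2&#34; }'
</code></pre>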
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1849">#1849</a> [Task] Improve Harvester upgrade responder</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <p>Follow the steps in <a href="https://github.com/harvester/harvester/issues/1849#issuecomment-1180346017">https://github.com/harvester/harvester/issues/1849#issuecomment-1180346017</a></p> <ol> <li>Clone <a href="https://github.com/longhorn/upgrade-responder">longhorn/upgrade-responder</a> and checkout to <a href="https://github.com/longhorn/upgrade-responder/releases/tag/v0.1.4">v0.1.4</a>.</li> <li>Edit <a href="https://github.com/longhorn/upgrade-responder/blob/master/config/response.json">response.json</a> content in config folder</li> </ol> <pre tabindex="0"><code>{ &#34;Versions&#34;: [ { &#34;Name&#34;: &#34;v1.0.2-master-head&#34;, &#34;ReleaseDate&#34;: &#34;2022-06-15T00:00:00Z&#34;, &#34;Tags&#34;: [ &#34;latest&#34;, &#34;test&#34;, &#34;dev&#34; ] } ] } </code></pr Shut down host in maintenance mode and verify label change https://harvester.github.io/tests/manual/hosts/1272-shutdown-host-in-maintenance-mode/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1272-shutdown-host-in-maintenance-mode/ - Related issues: #1272 Shut down a node with maintenance mode should show red label Verification Steps Open host page Set a node to maintenance mode Turn off host vm of the node Check node status Turn on host Check node status Expected Results The node should go into maintenance mode The node label should go red When turned on the node status should go back to yellow + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1272">#1272</a> Shut down a node with maintenance mode should show red label</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Open host page</li> <li>Set a node to maintenance mode</li> <li>Turn off host vm of the node</li> <li>Check node status</li> <li>Turn on host</li> <li>Check node status</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The node should go into maintenance mode</li> <li>The node label should go red</li> <li>When turned on the node status should go back to yellow</li> </ol> Shut down host then delete hosted VM https://harvester.github.io/tests/manual/hosts/delete_vm_after_host_shutdown/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/delete_vm_after_host_shutdown/ - Ref: N/A, legacy test case Criteria VM should created and started successfully Node should be unavailable after shutdown VM should able to be deleted Verify Steps: Install Harvester with at least 2 nodes Create a image for VM creation Create a VM vm1 and start it vm1 should started successfully Power off the node hosting vm1 the node should becomes unavailable on dashboard Delete vm1, vm1 should be deleted successfully + <p>Ref: N/A, legacy test case</p> <h3 id="criteria">Criteria</h3> <ul> <li><input checked="" disabled="" type="checkbox"> VM should created and started successfully</li> <li><input checked="" disabled="" type="checkbox"> Node should be unavailable after shutdown</li> <li><input checked="" disabled="" type="checkbox"> VM should able to be deleted</li> </ul> <h2 id="verify-steps">Verify Steps:</h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Create a image for VM creation</li> <li>Create a VM <code>vm1</code> and start it</li> <li><code>vm1</code> should started 
successfully</li> <li>Power off the node hosting <code>vm1</code></li> <li>the node should becomes unavailable on dashboard</li> <li>Delete <code>vm1</code>, <code>vm1</code> should be deleted successfully</li> </ol> SSL Certificate https://harvester.github.io/tests/manual/advanced/ssl-certificate/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/ssl-certificate/ - Ref: https://github.com/harvester/harvester/issues/761 Verify Items generated kubeconfig is able to access kubenetes API new node able to join the cluster using the configured Domain Name create node with ssl-certificates settings is working as expected. Case: Kubeconfig Install Harvester with at least 2 nodes Generate self-signed TLS certificates from https://www.selfsignedcertificate.com/ with specific name Navigate to advanced settings, edit ssl-certificates settings Update generated .cert file to CA and Public Certificate, .key file to Private Key Relogin with domain name Navigate to Support page, then Click Download KubeConfig, file should named local. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/761">https://github.com/harvester/harvester/issues/761</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>generated kubeconfig is able to access kubenetes API</li> <li>new node able to join the cluster using the configured Domain Name</li> <li>create node with ssl-certificates settings is working as expected.</li> </ul> <h3 id="case-kubeconfig">Case: Kubeconfig</h3> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Generate self-signed TLS certificates from <a href="https://www.selfsignedcertificate.com/">https://www.selfsignedcertificate.com/</a> with specific name</li> <li>Navigate to advanced settings, edit <code>ssl-certificates</code> settings</li> <li>Update generated <code>.cert</code> file to <em>CA</em> and <em>Public Certificate</em>, <code>.key</code> file to <em>Private Key</em></li> <li>Relogin with domain name</li> <li>Navigate to Support page, then Click <strong>Download KubeConfig</strong>, file should named <code>local.yaml</code></li> <li>Kubernetes API should able to be accessed with config <code>local.yaml</code> (follow one of the <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/">instruction</a> for testing)</li> </ol> <h3 id="case-host-joining-with-https-and-domain-name">Case: Host joining with https and Domain Name</h3> <ol> <li>Install Harvester with single node</li> <li>Generate self-signed TLS certificates from <a href="https://www.selfsignedcertificate.com/">https://www.selfsignedcertificate.com/</a> with specific name</li> <li>Navigate to advanced settings, edit <code>ssl-certificates</code> settings</li> <li>Update generated <code>.cert</code> file to <em>CA</em> and <em>Public Certificate</em>, <code>.key</code> file to <em>Private Key</em></li> <li>Install another Harvester Host as a joining node via PXE installation <ul> <li>the <code>server_url</code> MUST be configured as the specific domain name</li> <li>Be aware set <code>os.dns_nameservers</code> to make sure the domain name is reachable.</li> </ul> </li> <li>The joining node should joined to the cluster successfully.</li> </ol> <h3 id="case-host-creating-with-ssl-certificates">Case: Host creating with SSL certificates</h3> <ol> <li>Install Harvester with single node via PXE installation <ul> <li>fill in <code>system_settings.ssl-certificates</code> as the format in <a 
href="https://github.com/harvester/harvester/issues/761#issuecomment-993060101">https://github.com/harvester/harvester/issues/761#issuecomment-993060101</a></li> </ul> </li> <li>Dashboard should able to be accessed via VIP and domain name</li> </ol> Start Host in maintenance mode (e2e_be) https://harvester.github.io/tests/manual/hosts/maintenance-mode-start-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-start-host/ - For Host that is in maintenance mode and turned off Start host Expected Results Host should turn on Maintenance mode label in hosts list should go from red to yellow Known bugs https://github.com/harvester/harvester/issues/1272 + <ol> <li>For Host that is in maintenance mode and turned off</li> <li>Start host</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host should turn on</li> <li>Maintenance mode label in hosts list should go from red to yellow</li> </ol> <h3 id="known-bugs">Known bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/1272">https://github.com/harvester/harvester/issues/1272</a></p> Start VM and stop node Negative https://harvester.github.io/tests/manual/virtual-machines/negative-start-vm-and-stop-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/negative-start-vm-and-stop-node/ - Start the VM In a multi-node setup disconnect/shutdown the node where the VM is running Expected Results You should not be able to start the VM + <ol> <li>Start the VM</li> <li>In a multi-node setup disconnect/shutdown the node where the VM is running</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should not be able to start the VM</li> </ol> Start VM Negative (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/negative-start-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/negative-start-vm/ - In a multi-node setup disconnect/shutdown the node where the VM is running Start the VM Expected Results You should not be able to start the VM + <ol> <li>In a multi-node setup disconnect/shutdown the node where the VM is running</li> <li>Start the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should not be able to start the VM</li> </ol> Stop VM Negative (e2e_fe) https://harvester.github.io/tests/manual/virtual-machines/negative-stop-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/negative-stop-vm/ - In a multi-node setup disconnect/shutdown the node where the VM is running Stop the VM Expected Results The VM list should quickly update to not running, or some other error state + <ol> <li>In a multi-node setup disconnect/shutdown the node where the VM is running</li> <li>Stop the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM list should quickly update to not running, or some other error state</li> </ol> Support configuring a VLAN at the management interface in installer config https://harvester.github.io/tests/manual/_incoming/1390_support_configuring_a_vlan_at_the_management_interface_in_installer_config/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1390_support_configuring_a_vlan_at_the_management_interface_in_installer_config/ - Ref: https://github.com/harvester/harvester/issues/1390, https://github.com/harvester/harvester/issues/1647 Verify Steps: Install Harvester with any nodes from PXE Boot with configurd vlan with vlan_id 
Harvester should installed successfully Login to console, execute ip a s dev mgmt-br.&lt;vlan_id&gt; should have IP and accessible Dashboard should be accessible + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1390">https://github.com/harvester/harvester/issues/1390</a>, <a href="https://github.com/harvester/harvester/issues/1647">https://github.com/harvester/harvester/issues/1647</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/192803102-5062546d-ec36-4ecc-a1f3-4e6ec6c7a620.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes from PXE Boot with configurd vlan with <code>vlan_id</code></li> <li>Harvester should installed successfully</li> <li>Login to console, execute <code>ip a s dev mgmt-br.&lt;vlan_id&gt;</code> should have IP and accessible</li> <li>Dashboard should be accessible</li> </ol> Support multiple VLAN physical interfaces https://harvester.github.io/tests/manual/_incoming/2259-multiple-vlan-physical-interfaces/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2259-multiple-vlan-physical-interfaces/ - Related issues: #2259 [FEATURE] Support multiple VLAN physical interfaces Category: Network Verification Steps Create cluster network cn1 Create a vlanconfig config-n1 on cn1 which applied to node 1 only Select an available NIC on the Uplink Create a vlan, the cluster network cn1 vlanconfig and provide valid vlan id 91 Create cluster network cn2 Create a vlanconfig config-n2 on cn2 which applied to node 2 only Select an available NIC on the Uplink Create a vlan, the cluster network cn2 vlanconfig and provide valid vlan id 92 Create cluster network cn3 Create a vlanconfig config-n3 on cn3 which applied to node 3 only Select an available NIC on the Uplink Create a vlan, select the cluster network cn3 vlanconfig and provide valid vlan id 93 Create a VM, use the vlan id 1 and specific at any node Create a VM, use the vlan id 91 and specified at node1 Create another VM, use the vlan id 92 Expected Results Can create different vlan on each cluster network Can create VM using vlan id 91 and retrieve IP address correctly Can create VM using vlan id 92 and retrieve IP address correctly Can create VM using vlan id 1 and retrieve IP address correctly + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2259">#2259</a> [FEATURE] Support multiple VLAN physical interfaces</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Create cluster network <code>cn1</code> <img src="https://user-images.githubusercontent.com/29251855/196580297-57541544-48f5-4492-b3e9-a3450697f490.png" alt="image"></p> </li> <li> <p>Create a vlanconfig <code>config-n1</code> on <code>cn1</code> which applied to node 1 only <img src="https://user-images.githubusercontent.com/29251855/196580491-0572c539-5828-4f2e-a0a6-59b40fcc549b.png" alt="image"></p> </li> <li> <p>Select an available NIC on the Uplink <img src="https://user-images.githubusercontent.com/29251855/196580574-d38d59de-251c-4cf8-885d-655b76a78659.png" alt="image"></p> </li> <li> <p>Create a vlan, the cluster network <code>cn1</code> vlanconfig and provide valid vlan id <code>91</code> <img src="https://user-images.githubusercontent.com/29251855/196584602-b663ca69-da9a-42e3-94e0-41e094ff1d0b.png" alt="image"></p> Support private registry for Rancher agent image in Air-gap 
https://harvester.github.io/tests/manual/_incoming/2176-airgap-private-registry-rancher-agent-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2176-airgap-private-registry-rancher-agent-image/ - Related issues: #2176 [Enhancement] Air-gap operation: Support using a private registry for Rancher agent image Category: Rancher Integration Verification Steps Environment Setup Use vagrant-pxe-harvester to create a harvester cluster. Create another VM myregistry and set it in the same virtual network. In myregistry VM: Install docker. Run following commands: mkdir auth docker run \ --entrypoint htpasswd \ httpd:2 -Bbn testuser testpassword &gt; auth/htpasswd mkdir -p certs openssl req \ -newkey rsa:4096 -nodes -sha256 -keyout certs/domain. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2176">#2176</a> [Enhancement] Air-gap operation: Support using a private registry for Rancher agent image</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher Integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <h3 id="environment-setup">Environment Setup</h3> <ol> <li>Use vagrant-pxe-harvester to create a harvester cluster.</li> <li>Create another VM <code>myregistry</code> and set it in the same virtual network.</li> <li>In <code>myregistry</code> VM: <ul> <li>Install docker.</li> <li>Run following commands:</li> </ul> <pre tabindex="0"><code>mkdir auth docker run \ --entrypoint htpasswd \ httpd:2 -Bbn testuser testpassword &gt; auth/htpasswd mkdir -p certs openssl req \ -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \ -addext &#34;subjectAltName = DNS:myregistry.local&#34; \ -x509 -days 365 -out certs/domain.crt sudo mkdir -p /etc/docker/certs.d/myregistry.local:5000 sudo cp certs/domain.crt /etc/docker/certs.d/myregistry.local:5000/domain.crt docker run -d \ -p 5000:5000 \ --restart=always \ --name registry \ -v &#34;$(pwd)&#34;/certs:/certs \ -v &#34;$(pwd)&#34;/registry:/var/lib/registry \ -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \ -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \ -v &#34;$(pwd)&#34;/auth:/auth \ -e &#34;REGISTRY_AUTH=htpasswd&#34; \ -e &#34;REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm&#34; \ -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \ registry:2 </code></pr Support Volume Clone https://harvester.github.io/tests/manual/_incoming/2293_support_volume_clone/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2293_support_volume_clone/ - Ref: https://github.com/harvester/harvester/issues/2293 Verify Steps: Install Harvester with any nodes Create an Image for VM creation Create vm1 with the image and an additional data volume disk-1 Navigate to Volumes, clone disk-0 and disk-1 which attached to vm1 by clicking Clone Volume Create vm2 with cloned disk-0 and disk-1 vm2 should started successfully Login to vm1, execute following commands: fdisk /dev/vdb with new and primary partition mkfs.ext4 /dev/vdb1 mkdir vdb &amp;&amp; mount -t ext4 /dev/vdb1 vdb ping 127. 
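For the private-registry setup above, the registry can be sanity-checked from the Harvester node before pointing the Rancher agent at it; a minimal sketch, assuming myregistry.local resolves from the node and the testuser/testpassword credentials created above: <pre tabindex="0"><code># list repositories through the Docker Registry v2 API, trusting the self-signed CA generated above
curl -u testuser:testpassword --cacert certs/domain.crt https://myregistry.local:5000/v2/_catalog
# or log in with docker once the CA is placed under /etc/docker/certs.d/myregistry.local:5000/
docker login myregistry.local:5000 -u testuser -p testpassword
</code></pre>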
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/2293">https://github.com/harvester/harvester/issues/2293</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create an Image for VM creation</li> <li>Create <code>vm1</code> with the image and an additional data volume <code>disk-1</code></li> <li>Navigate to <em>Volumes</em>, clone <em>disk-0</em> and <em>disk-1</em> which attached to <code>vm1</code> by clicking <code>Clone Volume</code></li> <li>Create <code>vm2</code> with cloned <em>disk-0</em> and <em>disk-1</em></li> <li><code>vm2</code> should started successfully</li> <li>Login to <code>vm1</code>, execute following commands: <ul> <li><code>fdisk /dev/vdb</code> with new and primary partition</li> <li><code>mkfs.ext4 /dev/vdb1</code></li> <li><code>mkdir vdb &amp;&amp; mount -t ext4 /dev/vdb1 vdb</code></li> <li><code>ping 127.0.0.1 | tee -a vdb/test</code></li> </ul> </li> <li>Navigate to Volumes, then clone <code>disk-1</code> of <strong>vm1</strong> into <strong>vm1-disk-2</strong></li> <li>Navigate to Virtual Machines, then update <code>vm1</code> to add existing volume <code>vm1-disk-2</code></li> <li>Login to <code>vm1</code> then mount <code>/dev/vdb1</code>(disk-1) and <code>/dev/vdc1</code>(disk-2) into <em>vdb</em> and <em>vdc</em></li> <li>test file should be appeared in both folders of <em>vdb</em> and <em>vdc</em></li> <li>test file should not be empty in both folders of <em>vdb</em> and <em>vdc</em></li> </ol> Support volume hot plug live migrate https://harvester.github.io/tests/manual/live-migration/support-volume-hot-unplug-live-migrate/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/support-volume-hot-unplug-live-migrate/ - Related issues: #1401 Support volume hot-unplug Category: Storage Environment setup Setup an airgapped harvester Create an 3 nodes harvester cluster with large size disks Verification Steps Scenario2: Live migrate VM not have hot-plugged volume before, do hot-plugged the unplugged. 
Create a virtual machine Create several volumes (without image) Add volume, hot-plug volume to virtual machine Open virtual machine, find hot-plugged volume Click Detach volume Add volume again Migrate VM from one node to another Detach volume Add unplugged volume again Expected Results Can hot-plug volume without error Can hot-unplug the pluggable volumes without restarting VM The de-attached volume can also be hot-plug and mount back to VM + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1401">#1401</a> Support volume hot-unplug</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup an airgapped harvester</p> <ol> <li>Create an 3 nodes harvester cluster with large size disks</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <h5 id="scenario2-live-migrate-vm-not-have-hot-plugged-volume-before-do-hot-plugged-the-unplugged">Scenario2: Live migrate VM not have hot-plugged volume before, do hot-plugged the unplugged.</h5> <ol> <li>Create a virtual machine</li> <li>Create several volumes (without image)</li> <li>Add volume, hot-plug volume to virtual machine</li> <li>Open virtual machine, find hot-plugged volume</li> <li>Click Detach volume</li> <li>Add volume again</li> <li>Migrate VM from one node to another</li> <li>Detach volume</li> <li>Add unplugged volume again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can hot-plug volume without error</li> <li>Can hot-unplug the pluggable volumes without restarting VM</li> <li>The de-attached volume can also be hot-plug and mount back to VM</li> </ol> Support Volume Hot Unplug (e2e_fe) https://harvester.github.io/tests/manual/volumes/support-volume-hot-unplug/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/support-volume-hot-unplug/ - Related issues: #1401 Support volume hot-unplug Category: Storage Environment setup Setup an airgapped harvester Create an 3 nodes harvester cluster with large size disks Scenario1: Live migrate VM already have hot-plugged volume to new node, then detach (hot-unplug) it Verification Steps Create a virtual machine Create several volumes (without image) Add volume, hot-plug volume to virtual machine Open virtual machine, find hot-plugged volume Click de-attach volume Add volume again Expected Results Can hot-plug volume without error Can hot-unplug the pluggable volumes without restarting VM The de-attached volume can also be hot-plug and mount back to VM + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1401">#1401</a> Support volume hot-unplug</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup an airgapped harvester</p> <ol> <li>Create an 3 nodes harvester cluster with large size disks</li> </ol> <h5 id="scenario1-live-migrate-vm-already-have-hot-plugged-volume-to-new-node-then-detach-hot-unplug-it">Scenario1: Live migrate VM already have hot-plugged volume to new node, then detach (hot-unplug) it</h5> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a virtual machine</li> <li>Create several volumes (without image)</li> <li>Add volume, hot-plug volume to virtual machine</li> <li>Open virtual machine, find hot-plugged volume</li> <li>Click de-attach volume</li> <li>Add volume again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can hot-plug volume without error</li> <li>Can hot-unplug the 
pluggable volumes without restarting VM</li> <li>The de-attached volume can also be hot-plug and mount back to VM</li> </ol> Support Volume Snapshot https://harvester.github.io/tests/manual/_incoming/2294_support_volume_snapshot/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2294_support_volume_snapshot/ - Ref: https://github.com/harvester/harvester/issues/2294 Verify Steps: Install Harvester with any nodes Create an Image for VM creation Create vm1 with the image and an additional data volume disk-1 Login to vm1, execute following commands: fdisk /dev/vdb with new and primary partition mkfs.ext4 /dev/vdb1 mkdir vdb &amp;&amp; mount -t ext4 /dev/vdb1 vdb ping 127.0.0.1 | tee -a vdb/test Navigate to Volumes, then click Take Snapshot button on disk-1 of vm1 into vm1-disk-2 Navigate to Virtual Machines, then update vm1 to add existing volume vm1-disk-2 Login to vm1 then mount /dev/vdb1(disk-1) and /dev/vdc1(disk-2) into vdb and vdc test file should be appeared in both folders of vdb and vdc test file should not be empty in both folders of vdb and vdc + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2294">https://github.com/harvester/harvester/issues/2294</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create an Image for VM creation</li> <li>Create <code>vm1</code> with the image and an additional data volume <code>disk-1</code></li> <li>Login to <code>vm1</code>, execute following commands: <ul> <li><code>fdisk /dev/vdb</code> with new and primary partition</li> <li><code>mkfs.ext4 /dev/vdb1</code></li> <li><code>mkdir vdb &amp;&amp; mount -t ext4 /dev/vdb1 vdb</code></li> <li><code>ping 127.0.0.1 | tee -a vdb/test</code></li> </ul> </li> <li>Navigate to Volumes, then click <strong>Take Snapshot</strong> button on <code>disk-1</code> of <strong>vm1</strong> into <strong>vm1-disk-2</strong></li> <li>Navigate to Virtual Machines, then update <code>vm1</code> to add existing volume <code>vm1-disk-2</code></li> <li>Login to <code>vm1</code> then mount <code>/dev/vdb1</code>(disk-1) and <code>/dev/vdc1</code>(disk-2) into <em>vdb</em> and <em>vdc</em></li> <li>test file should be appeared in both folders of <em>vdb</em> and <em>vdc</em></li> <li>test file should not be empty in both folders of <em>vdb</em> and <em>vdc</em></li> </ol> Switch the vlan interface of harvester node https://harvester.github.io/tests/manual/network/switch-the-vlan-interface-of-harvester-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/switch-the-vlan-interface-of-harvester-node/ - Related issues: #1464 VM pods turn to the terminating state after switching the VLAN interface Category: Network Verification Steps User ipxe-example to build up 3 nodes harvester Login harvester dashboard -&gt; Access Settings Enable vlan network with harvester-mgmt NIC interface Create a VM using harvester-mgmt Disable vlan network Enable vlan network and select bond0 interface Check host and vm is working Directly switch network interface from bond0 to harvester-mgmt without disable it. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1464">#1464</a> VM pods turn to the terminating state after switching the VLAN interface</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>User ipxe-example to build up 3 nodes harvester</li> <li>Login harvester dashboard -&gt; Access Settings</li> <li>Enable vlan network with <code>harvester-mgmt</code> NIC interface</li> <li>Create a VM using <code>harvester-mgmt</code></li> <li>Disable vlan network</li> <li>Enable vlan network and select <code>bond0</code> interface <img src="https://user-images.githubusercontent.com/29251855/144204800-ed20ab79-0c18-4a70-b258-2468d62e072d.png" alt="image"></li> <li>Check host and vm is working</li> <li>Directly switch network interface from <code>bond0</code> to <code>harvester-mgmt</code> without disable it. <img src="https://user-images.githubusercontent.com/29251855/144206080-cbba3e29-b125-422a-b629-9a412a218feb.png" alt="image"></li> <li>Check host and vm is working</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Switch the VLAN interface of this host can&rsquo;t affect Host and VM operation.</li> <li>All harvester node keep in <code>running</code> status</li> </ol> <p><img src="https://user-images.githubusercontent.com/29251855/144206164-092272aa-0488-40f4-bb3d-4a1aea5fdb5d.png" alt="image"></p> Sync harvester node's topology labels to rke2 guest-cluster's node https://harvester.github.io/tests/manual/_incoming/1418-sync-topology-labels-to-rke2-guest-cluster-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1418-sync-topology-labels-to-rke2-guest-cluster-node/ - Related issues: #1418 Support topology aware scheduling of guest cluster workloads Verification Steps Add topology labels(topology.kubernetes.io/region, topology.kubernetes.io/zone) to the Harvester node: In Harvester UI, select Hosts page. Click hosts&rsquo; Edit Config. Select Labels page, click Add Labels. Fill in, eg, Key: topology.kubernetes.io/zone, Value: zone1. Create harvester guest-cluster from rancher-UI. Wait for the guest-cluster to be created successfully and check if the guest-cluster node labels are consistent with the harvester nodes. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1418">#1418</a> Support topology aware scheduling of guest cluster workloads</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Add <a href="https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesioregion">topology labels</a>(<code>topology.kubernetes.io/region</code>, <code>topology.kubernetes.io/zone</code>) to the Harvester node:</p> <ul> <li>In Harvester UI, select <code>Hosts</code> page.</li> <li>Click hosts&rsquo; <code>Edit Config</code>.</li> <li>Select <code>Labels</code> page, click <code>Add Labels</code>.</li> <li>Fill in, eg, Key: <code>topology.kubernetes.io/zone</code>, Value: <code>zone1</code>.</li> </ul> </li> <li> <p>Create harvester guest-cluster from rancher-UI.</p> </li> <li> <p>Wait for the guest-cluster to be created successfully and check if the guest-cluster node labels are consistent with the harvester nodes.</p> Sync image display name to image labels https://harvester.github.io/tests/manual/_incoming/2630-sync-image-display-name-to-image-labels/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2630-sync-image-display-name-to-image-labels/ - Related issues: #2630 [FEATURE] Sync image display_name to image labels Category: Image Verification Steps Login harvester dashboard Access the Preference page Enable developer tool Create an ubuntu focal image from url https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img View API of the created image Check can found the display name in the image API content Create the same ubuntu focal image from previous url again which would bring the same display name Check would be denied with error message Create a different ubuntu focal image with the same display name Expected Results In image API content, label harvesterhci. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2630">#2630</a> [FEATURE] Sync image display_name to image labels</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Image</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Login harvester dashboard</li> <li>Access the Preference page</li> <li>Enable developer tool <img src="https://user-images.githubusercontent.com/29251855/187353113-495af11e-a3e5-4f8e-b03b-174b4f0660ea.png" alt="image"></li> <li>Create an ubuntu focal image from url <a href="https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img">https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img</a> <img src="https://user-images.githubusercontent.com/29251855/187353177-52516c6d-8e68-4ac5-8b40-4006f6460773.png" alt="image"></li> <li>View API of the created image <img src="https://user-images.githubusercontent.com/29251855/187353338-1f0691f3-b19a-4382-a26f-ab5897842474.png" alt="image"></li> <li>Check can found the display name in the image API content</li> <li>Create the same ubuntu focal image from previous url again which would bring the same display name</li> <li>Check would be denied with error message</li> <li>Create a different ubuntu focal image with the same display name</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>In image API content, label <code>harvesterhci.io/imageDisplayName</code> added to labels, and it&rsquo;s value should be the displayName value <img src="https://user-images.githubusercontent.com/29251855/187353496-39c20027-f438-43de-a212-4f38b2dfbbae.png" alt="image"></li> <li>Image with the same display name in label would be denied by admission webhook &ldquo;validator.harvesterhci.io&rdquo; <img src="https://user-images.githubusercontent.com/29251855/187354352-ea2f08f3-01a1-4088-899b-d92e25433781.png" alt="image"></li> <li>Image with the same display name but different url would also be denied <img src="https://user-images.githubusercontent.com/29251855/187355241-845b09b5-953b-4e90-9948-ca8b025a6f5d.png" alt="image"></li> </ol> Take host out of maintenance mode that has been rebooted (e2e_be) https://harvester.github.io/tests/manual/hosts/maintenance-mode-enable-host-rebooted/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-enable-host-rebooted/ - For host in maintenance mode that has been rebooted take host out of maintenance mode Expected Results Host should go to Active Label shbould go green + <ol> <li>For host in maintenance mode that has been rebooted take host out of maintenance mode</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host should go to Active</li> <li>Label shbould go green</li> </ol> Take host out of maintenance mode that has not been rebooted (e2e_be) https://harvester.github.io/tests/manual/hosts/maintenance-mode-enable-host-not-rebooted/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-enable-host-not-rebooted/ - For host in maintenance mode that has not been rebooted take host out of maintenance mode Expected Results Host should go to Active Label shbould go green + <ol> <li>For host in maintenance mode that has not been rebooted take host out of maintenance mode</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host should go to Active</li> <li>Label shbould go green</li> </ol> Target Harvester by setting the variable kubeconfig with your kubeconfig file in the provider.tf file 
(e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-kubeconfig-variasble/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-kubeconfig-variasble/ - Define the kubeconfig variable in the provider.tf file terraform { required_providers { harvester = { source = &#34;registry.terraform.io/harvester/harvester&#34; version = &#34;~&gt; 0.1.0&#34; } } } provider &#34;harvester&#34; { kubeconfig = &#34;/path/of/my/kubeconfig&#34; } Check if you can interact with the Harvester by creating resource like a SSH key Execute the terraform apply command Expected Results The resource should be created Apply complete! Resources: 1 added, 0 changed, 0 destroyed. Check if you can see your resource in the Harvester WebUI + <ol> <li>Define the kubeconfig variable in the provider.tf file</li> </ol> <pre tabindex="0"><code>terraform { required_providers { harvester = { source = &#34;registry.terraform.io/harvester/harvester&#34; version = &#34;~&gt; 0.1.0&#34; } } } provider &#34;harvester&#34; { kubeconfig = &#34;/path/of/my/kubeconfig&#34; } </code></pre><ol> <li>Check if you can interact with the Harvester by creating resource like a SSH key</li> <li>Execute the <code>terraform apply</code> command</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The resource should be created <code>Apply complete! Resources: 1 added, 0 changed, 0 destroyed.</code></li> <li>Check if you can see your resource in the Harvester WebUI</li> </ol> Target Harvester with the default kubeconfig located in $HOME/.kube/config (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-kubeconfig-home/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-kubeconfig-home/ - Make sure the kubeconfig is defined in the file $HOME/.kube/config Check if you can interact with the Harvester by creating resource like a SSH key Execute the terraform apply command Expected Results The resource should be created Apply complete! Resources: 1 added, 0 changed, 0 destroyed. Check if you can see your resource in the Harvester WebUI + <ol> <li>Make sure the kubeconfig is defined in the file <code>$HOME/.kube/config</code></li> <li>Check if you can interact with the Harvester by creating resource like a SSH key</li> <li>Execute the <code>terraform apply</code> command</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The resource should be created <code>Apply complete! Resources: 1 added, 0 changed, 0 destroyed.</code></li> <li>Check if you can see your resource in the Harvester WebUI</li> </ol> template with EFI (e2e_fe) https://harvester.github.io/tests/manual/_incoming/2577-template-with-efi/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2577-template-with-efi/ - Related issues: #2577 [BUG] Boot in EFI mode not selected when creating multiple VM instances using VM template with EFI mode selected. Category Template Verification Steps Go to Template, create a VM template with Boot in EFI mode selected. Go to Virtual Machines, click Create, select Multiple instance, type in a random name prefix, and select the VM template we just created. Go to Advanced Options, for now this EFI checkbox should be checked without any issue. 
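The terraform-provider kubeconfig cases above say to "create a resource like a SSH key" to confirm the provider works. A minimal sketch of that resource, to be dropped next to the `provider.tf` shown above; the `harvester_ssh_key` attribute names are assumed from the provider documentation and should be checked against the provider version in use, and the key material is a placeholder:

```bash
cat > ssh_key.tf <<'EOF'
resource "harvester_ssh_key" "demo" {
  name       = "demo-key"
  namespace  = "default"
  public_key = "ssh-ed25519 AAAA...placeholder user@example"
}
EOF

terraform plan      # expect: Plan: 1 to add, 0 to change, 0 to destroy.
terraform apply     # expect: Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```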
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2577">#2577</a> [BUG] Boot in EFI mode not selected when creating multiple VM instances using VM template with EFI mode selected.</li> </ul> <h2 id="category">Category</h2> <ul> <li>Template</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Go to Template, create a VM template with Boot in EFI mode selected. <img src="https://user-images.githubusercontent.com/9990804/181196319-d95a4d23-ea31-418c-9fd2-152821d56930.png" alt="image"></li> <li>Go to Virtual Machines, click Create, select Multiple instance, type in a random name prefix, and select the VM template we just created. <img src="image.png" alt="image"></li> <li>Go to Advanced Options, for now this EFI checkbox should be checked without any issue. <img src="https://user-images.githubusercontent.com/9990804/181196934-1249902f-47dd-44dc-bced-5911ffcfdf16.png" alt="image"></li> <li>Create a VM with template</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Check VM setting, the booting in EFI mode is checked <img src="https://user-images.githubusercontent.com/29251855/182343254-4a421a04-aa3f-471c-a258-930a98cc84d3.png" alt="image"></li> <li>Verify that VM is running with UEFI using</li> </ol> <pre tabindex="0"><code>ubuntu@efi-01:~$ ls /sys/firmware/ acpi dmi efi memmap </code></pr Temporary network disruption https://harvester.github.io/tests/manual/hosts/negative-network-disruption/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/negative-network-disruption/ - Create a vms on the cluster. Disable network of a node for sometime. e.g. 5 sec, 5 mins Expected Results VM should be accessible after the network is up. + <ol> <li>Create a vms on the cluster.</li> <li>Disable network of a node for sometime. e.g. 5 sec, 5 mins</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should be accessible after the network is up.</li> </ol> Terraform import VLAN https://harvester.github.io/tests/manual/_incoming/2261-terraform-import-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2261-terraform-import-vlan/ - Related issues: #2261 [FEATURE] enhance terraform network to not pruge route_cidr and route_gateway Category: Terraform Verification Steps Install Harvester with any nodes Install terraform-harvester-provider (using master-head for testing) Execute terraform init Create the file network.tf as following snippets, then execute terraform import harvester_clusternetwork.vlan vlan to import default vlan settings resource &#34;harvester_clusternetwork&#34; &#34;vlan&#34; { name = &#34;vlan&#34; enable = true default_physical_nic = &#34;harvester-mgmt&#34; } resource &#34;harvester_network&#34; &#34;vlan1&#34; { name = &#34;vlan1&#34; namespace = &#34;harvester-public&#34; vlan_id = 1 route_mode = &#34;auto&#34; } execute terraform apply Login to dashboard then navigate to Advanced/Networks, make sure the Route Connectivity becomes Active Execute terraform apply again and many more times Expected Results Resources should not be changed or added or destroyed. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2261">#2261</a> [FEATURE] enhance terraform network to not pruge route_cidr and route_gateway</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Terraform</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester with any nodes</li> <li>Install terraform-harvester-provider (using master-head for testing)</li> <li>Execute <code>terraform init</code></li> <li>Create the file network.tf as following snippets, then execute <code>terraform import harvester_clusternetwork.vlan vlan</code> to import default vlan settings</li> </ol> <pre tabindex="0"><code>resource &#34;harvester_clusternetwork&#34; &#34;vlan&#34; { name = &#34;vlan&#34; enable = true default_physical_nic = &#34;harvester-mgmt&#34; } resource &#34;harvester_network&#34; &#34;vlan1&#34; { name = &#34;vlan1&#34; namespace = &#34;harvester-public&#34; vlan_id = 1 route_mode = &#34;auto&#34; } </code></pr Terraform Rancher2 Provider Testing https://harvester.github.io/tests/manual/harvester-rancher2-terraform-integration/terraform_rancher2_provider_testing/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher2-terraform-integration/terraform_rancher2_provider_testing/ - Ref: https://github.com/rancher/terraform-provider-rancher2/issues/1009 Test Information Environment Rancher: v2.7.X Environment for Harvester: bare-metal or qemu Harvester Version: v1.1.X ui-source Option: Auto Rancher2 Terraform Provider Plugin: v3.0.X rancher2 Test Setup Rancher2 Terraform Provider: make sure terraform is installed at version equal or greater than 1.3.9, ie: sudo apt install terraform utilize the setup-provider.sh script from the rancher2 terraform provider repo if testing an rc it would look something like ./setup-provider.sh rancher2 v3.0.0-rc1 ensure the provider is installed, can cross check the directory structures under ~/. 
+ <p>Ref: <a href="https://github.com/rancher/terraform-provider-rancher2/issues/1009">https://github.com/rancher/terraform-provider-rancher2/issues/1009</a></p> <h2 id="test-information">Test Information</h2> <ul> <li>Environment Rancher: v2.7.X</li> <li>Environment for Harvester: bare-metal or qemu</li> <li>Harvester Version: v1.1.X</li> <li><strong>ui-source</strong> Option: <strong>Auto</strong></li> <li>Rancher2 Terraform Provider Plugin: v3.0.X <a href="https://github.com/rancher/terraform-provider-rancher2/releases">rancher2</a></li> </ul> <h3 id="test-setup-rancher2-terraform-provider">Test Setup Rancher2 Terraform Provider:</h3> <ol> <li>make sure terraform is installed at version equal or greater than 1.3.9, ie: <code>sudo apt install terraform</code></li> <li>utilize the <a href="https://github.com/rancher/terraform-provider-rancher2/blob/master/setup-provider.sh">setup-provider.sh</a> script from the rancher2 terraform provider repo if testing an rc it would look something like <code>./setup-provider.sh rancher2 v3.0.0-rc1</code></li> <li>ensure the provider is installed, can cross check the directory structures under <code>~/.terraform.d/plugins/terraform.local</code></li> </ol> <h3 id="setup-rancher-v27x">Setup Rancher v2.7.X</h3> <ol> <li>build an API Key for Rancher utilizing <a href="https://ranchermanager.docs.rancher.com/reference-guides/user-settings/api-keys">this doc</a>, keeping reference of the: access-key, secret-key, &amp; bearer token</li> <li>import a harvester cluster into Rancher v2.7.X, keep reference of that Harvester cluster name</li> </ol> <h2 id="additional-setup">Additional Setup</h2> <ol> <li>build out a temporary directory to preform this deep integration testing</li> <li>create the following two folders of something like: <ul> <li><code>harvester-setup</code></li> <li><code>rancher-setup</code></li> </ul> </li> <li>inside each folder create a: <ul> <li><code>main.tf</code></li> <li><code>provider.tf</code></li> </ul> </li> </ol> <h3 id="harvester-setup">Harvester Setup</h3> <ol> <li>download the Harvester kubeconfig file into the <code>harvester-setup</code> folder</li> <li>inside the <code>harvester-setup</code> folder in the <code>provider.tf</code> file add:</li> </ol> <pre tabindex="0"><code>terraform { required_version = &#34;&gt;= 0.13&#34; required_providers { harvester = { source = &#34;harvester/harvester&#34; version = &#34;0.6.1&#34; } } } provider &#34;harvester&#34; { kubeconfig = &#34;&lt;the kubeconfig file path of the harvester cluster&gt;&#34; } </code></pr Terraformer import KUBECONFIG https://harvester.github.io/tests/manual/_incoming/2604-terraformer-import-kubeconfig/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2604-terraformer-import-kubeconfig/ - Related issues: #2604 [BUG] Terraformer imported VLAN always be 0 Category: Terraformer Verification Steps Install Harvester with any nodes Login to dashboard, navigate to: Advanced/Settings -&gt; then enabledvlan` Navigate to Advanced/Networks and Create a Network which Vlan ID is not 0 Navigate to Support Page and Download KubeConfig file Initialize a terraform environment, download Harvester Terraformer Execute command terraformer import harvester -r network to generate terraform configuration from the cluster Generated file generated/harvester/network/network. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2604">#2604</a> [BUG] Terraformer imported VLAN always be 0</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Terraformer</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester with any nodes</li> <li>Login to dashboard, navigate to: Advanced/Settings -&gt; then enabledvlan`</li> <li>Navigate to Advanced/Networks and Create a Network which Vlan ID is not 0</li> <li>Navigate to Support Page and Download KubeConfig file</li> <li>Initialize a terraform environment, download Harvester Terraformer</li> <li>Execute command <code>terraformer import harvester -r network</code> to generate terraform configuration from the cluster</li> <li>Generated file <code>generated/harvester/network/network.tf</code> should exists</li> <li>VLAN and other settings should match</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>vlan_id should be the same as the import cluster.</li> </ol> Test a deployment with ALL resources at the same time (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/deployment-all-resources/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/deployment-all-resources/ - Re-use the previous generated TF files and group them all either in one directory or in the same file Generates a speculative execution plan with terraform plan command Create the resources with terraform apply command Check that all resources are correctly created/running on the Harvester cluster Destroy the resources with the command terraform destroy Expected Results Refer to the harvester_ssh_key resource expected results + <ol> <li>Re-use the previous generated TF files and group them all either in one directory or in the same file</li> <li>Generates a speculative execution plan with terraform plan command</li> <li>Create the resources with terraform apply command</li> <li>Check that all resources are correctly created/running on the Harvester cluster</li> <li>Destroy the resources with the command terraform destroy</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Refer to the harvester_ssh_key resource expected results</p> Test aborting live migration https://harvester.github.io/tests/manual/live-migration/abort-live-migration/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/abort-live-migration/ - On a VM that is turned on select migrate Start the migration Abort the migration Expected Results You should see the status move to migrating You should see the status move to aborting migration You should see the status move to running The VM should pass health checks + <ol> <li>On a VM that is turned on select migrate</li> <li>Start the migration</li> <li>Abort the migration</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should see the status move to migrating</li> <li>You should see the status move to aborting migration</li> <li>You should see the status move to running</li> <li>The VM should pass health checks</li> </ol> Test NTP server timesync https://harvester.github.io/tests/manual/hosts/1535-test-ntp-timesync/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1535-test-ntp-timesync/ - Related issues: #1535 NTP daemon in host OS Environment setup This should be on at least a 3 node setup that has been running for several hours that had NTP servers setup during install Verification Steps SSH into nodes and verify times 
are close Verify NTP is active with sudo timedatectl status Expected Results Times should be within a minute of each other NTP should show as active + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1535">#1535</a> NTP daemon in host OS</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>This should be on at least a 3 node setup that has been running for several hours that had NTP servers setup during install</p> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>SSH into nodes and verify times are close</li> <li>Verify NTP is active with <code>sudo timedatectl status</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Times should be within a minute of each other</li> <li>NTP should show as active</li> </ol> Test NTP server timesync https://harvester.github.io/tests/manual/misc/1535-test-ntp-timesync/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/misc/1535-test-ntp-timesync/ - Related issues: #1535 NTP daemon in host OS Environment setup This should be on at least a 3 node setup that has been running for several hours that had NTP servers setup during install Verification Steps SSH into nodes and verify times are close Verify NTP is active with sudo timedatectl status Expected Results Times should be within a minute of each other NTP should show as active + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1535">#1535</a> NTP daemon in host OS</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>This should be on at least a 3 node setup that has been running for several hours that had NTP servers setup during install</p> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>SSH into nodes and verify times are close</li> <li>Verify NTP is active with <code>sudo timedatectl status</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Times should be within a minute of each other</li> <li>NTP should show as active</li> </ol> Test the harvester_clusternetwork resource (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-clusternetwork-resource/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-clusternetwork-resource/ - Refer to the harvester_ssh_key resource test steps + <p>Refer to the harvester_ssh_key resource test steps</p> Test the harvester_image resource (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-image-resource/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-image-resource/ - Refer to the harvester_ssh_key resource test steps + <p>Refer to the harvester_ssh_key resource test steps</p> Test the harvester_network resource (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-network-resource/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-network-resource/ - Refer to the harvester_ssh_key resource test steps + <p>Refer to the harvester_ssh_key resource test steps</p> Test the harvester_ssh_key resource (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-ssh-key-resource/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-ssh-key-resource/ - These following steps must be done for every resources, for avoiding repetitions, look at the detailed instructions at the beginning of the page. 
Import a resource Generates a speculative execution plan with terraform plan command Create the resource with terraform apply command Use terraform plan again Use terraform apply again Destroy the resource with the command terraform destroy Expected Results The resource is well imported in the terraform.tfstate file and you can print it with the terraform show command The command should display the difference between the actual status and the configured status Plan: 1 to add, 0 to change, 0 to destroy. + <p>These following steps must be done for every resources, for avoiding repetitions, look at the detailed instructions at the beginning of the page.</p> <ol> <li>Import a resource</li> <li>Generates a speculative execution plan with terraform plan command</li> <li>Create the resource with terraform apply command</li> <li>Use terraform plan again</li> <li>Use terraform apply again</li> <li>Destroy the resource with the command terraform destroy</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The resource is well imported in the terraform.tfstate file and you can print it with the terraform show command</li> <li>The command should display the difference between the actual status and the configured status <code>Plan: 1 to add, 0 to change, 0 to destroy.</code> <code>Apply complete! Resources: 1 added, 0 changed, 0 destroyed.</code></li> <li>You must see the new resource(s) on the Harvester dashboard`</li> <li><code>No changes. Your infrastructure matches the configuration.</code></li> <li><code>Apply complete! Resources: 0 added, 0 changed, 0 destroyed.</code></li> <li><code>Destroy complete! Resources: 1 destroyed.</code></li> </ol> Test the harvester_virtualmachine resource (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-virtualmachine-resource/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-virtualmachine-resource/ - Refer to the harvester_ssh_key resource test steps + <p>Refer to the harvester_ssh_key resource test steps</p> Test the harvester_volume resource (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-volume-resource/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-volume-resource/ - Refer to the harvester_ssh_key resource test steps + <p>Refer to the harvester_ssh_key resource test steps</p> Test zero downtime for live migration download test https://harvester.github.io/tests/manual/live-migration/zero-downtime-download-test/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/zero-downtime-download-test/ - Connect to VM via console Start a large file download Live migrate VM to new host Verify that file download does not fail Expected Results Console should open VM should start to migrate File download should + <ol> <li>Connect to VM via console</li> <li>Start a large file download</li> <li>Live migrate VM to new host</li> <li>Verify that file download does not fail</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Console should open</li> <li>VM should start to migrate</li> <li>File download should</li> </ol> Test zero downtime for live migration ping test https://harvester.github.io/tests/manual/live-migration/zero-downtime-ping-test/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/zero-downtime-ping-test/ - Continually ping VM Verify that ping is getting a response Live migrate VM to new host Verify 
that ping continues Expected Results Ping should get response VM should start to migrate Ping should not get any dropped packets + <ol> <li>Continually ping VM</li> <li>Verify that ping is getting a response</li> <li>Live migrate VM to new host</li> <li>Verify that ping continues</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Ping should get response</li> <li>VM should start to migrate</li> <li>Ping should not get any dropped packets</li> </ol> Testing Harvester Storage Tiering https://harvester.github.io/tests/manual/_incoming/2147-testing-storage-tiering/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2147-testing-storage-tiering/ - Related issues: #2147 [[FEATURE] Storage Tiering Category: Images Volumes VirtualMachines Test Setup Steps Have a Harvester Node with 3 Disks in total (one main disk, two additional disks), ideally the two additional disks should be roughly 20/30Gi for testing Add the additional disks to the harvester node (you may first need to be on the node itself and do a sudo gdisk /dev/sda and then w and y to write the disk identifier so that Harvester can recogonize the disk, note you shouldn&rsquo;t need to build partitions) Add the disks to the Harvester node via: Hosts -&gt; Edit Config -&gt; Storage -&gt; &ldquo;Add Disk&rdquo; (call-to-action), they should auto populate with available disks that you can add Save Navigate back to Hosts -&gt; Host -&gt; Edit Config -&gt; Storage, then add a Host Tag, and a unique disk tag for every disk (including the main disk/default-disk) Verification Steps with Checks Navigate to Advanced -&gt; Storage Classes -&gt; Create (Call-To-Action), create a storageClass &ldquo;sc-a&rdquo;, specify nodeSelector (choose host), diskSelector (choose one of the unique disk tags), number of replicas (1-12) Also create a storageClass &ldquo;sc-b&rdquo;, specify nodeSelector (choose host), diskSelector (choose one of the unique disk tags), number of replicas (1-12) Create a new image img-a, specify storageClassName to sc-a Create a new vm vm1 use the image img-a Check the replicas number and location of rootdisk volume in longhorn UI Create a new volume volume-a by choose source=image img-a Add the volume volume-a to vm vm1 Check the replicas number and location of volume volume-a in longhorn UI: volume-a, should also be seen in kubectl get pv --all-namespaces (where &ldquo;Claim&rdquo; is volume-a) with the appropriate storage class also with something like kubectl describe pv/pvc-your-uuid-from-get-pv-call-with-volume-a --all-namespaces: can audit volume attributes like: VolumeAttributes: diskSelector=second migratable=true nodeSelector=node-2 numberOfReplicas=1 share=true staleReplicaTimeout=30 storage. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2147">#2147</a> [[FEATURE] Storage Tiering</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Images</li> <li>Volumes</li> <li>VirtualMachines</li> </ul> <h2 id="test-setup-steps">Test Setup Steps</h2> <ol> <li>Have a Harvester Node with 3 Disks in total (one main disk, two additional disks), ideally the two additional disks should be roughly 20/30Gi for testing</li> <li>Add the additional disks to the harvester node (you may first need to be on the node itself and do a <code>sudo gdisk /dev/sda</code> and then <code>w</code> and <code>y</code> to write the disk identifier so that Harvester can recogonize the disk, note you shouldn&rsquo;t need to build partitions)</li> <li>Add the disks to the Harvester node via: Hosts -&gt; Edit Config -&gt; Storage -&gt; &ldquo;Add Disk&rdquo; (call-to-action), they should auto populate with available disks that you can add</li> <li>Save</li> <li>Navigate back to Hosts -&gt; Host -&gt; Edit Config -&gt; Storage, then add a Host Tag, and a unique disk tag for every disk (including the main disk/default-disk)</li> </ol> <h2 id="verification-steps-with-checks">Verification Steps with Checks</h2> <ol> <li>Navigate to Advanced -&gt; Storage Classes -&gt; Create (Call-To-Action), create a storageClass &ldquo;sc-a&rdquo;, specify nodeSelector (choose host), diskSelector (choose one of the unique disk tags), number of replicas (1-12)</li> <li>Also create a storageClass &ldquo;sc-b&rdquo;, specify nodeSelector (choose host), diskSelector (choose one of the unique disk tags), number of replicas (1-12)</li> <li>Create a new image img-a, specify storageClassName to sc-a</li> <li>Create a new vm vm1 use the image img-a</li> <li>Check the replicas number and location of rootdisk volume in longhorn UI</li> <li>Create a new volume volume-a by choose source=image img-a</li> <li>Add the volume volume-a to vm vm1</li> <li>Check the replicas number and location of volume volume-a in longhorn UI: <ol> <li>volume-a, should also be seen in <code>kubectl get pv --all-namespaces</code> (where &ldquo;Claim&rdquo; is volume-a) with the appropriate storage class</li> <li>also with something like <code>kubectl describe pv/pvc-your-uuid-from-get-pv-call-with-volume-a --all-namespaces</code>: <ol> <li>can audit volume attributes like:</li> </ol> <pre tabindex="0"><code> VolumeAttributes: diskSelector=second migratable=true nodeSelector=node-2 numberOfReplicas=1 share=true staleReplicaTimeout=30 storage.kubernetes.io/csiProvisionerIdentity=1665780638152-8081-driver.longhorn.io </code></pr The count of volume snapshots should not include VM's snapshots https://harvester.github.io/tests/manual/_incoming/3004-volume-snaphost-not-include-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/3004-volume-snaphost-not-include-vm/ - Related issues: #3004 [BUG] The count of volume snapshots should not include VM&rsquo;s snapshots Category: Volume Verification Steps Create a VM vm1 Take a VM snapshot Check the volume snapshot page Check the VM snapshot page Expected Results When one VM is created Only VM snap are created The count of volume snapshots should not include VM&rsquo;s snapshots. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/3004">#3004</a> [BUG] The count of volume snapshots should not include VM&rsquo;s snapshots</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Volume</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a VM <code>vm1</code></li> <li>Take a VM snapshot</li> <li>Check the volume snapshot page</li> <li>Check the VM snapshot page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>When one VM is created <img src="https://user-images.githubusercontent.com/29251855/197482909-baf7d1f4-4032-4180-bb88-22aac8b9a8bc.png" alt="image"></p> </li> <li> <p>Only VM snap are created <img src="https://user-images.githubusercontent.com/29251855/197484294-46b89b29-78be-4d28-a33c-77aa525850a8.png" alt="image"></p> </li> <li> <p>The count of volume snapshots should not include VM&rsquo;s snapshots. <img src="https://user-images.githubusercontent.com/29251855/197484528-ed4c562b-782b-400e-99ec-fa97e292568d.png" alt="image"></p> Timeout option for support bundle https://harvester.github.io/tests/manual/advanced/support_bundle_timeout/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/support_bundle_timeout/ - Ref: https://github.com/harvester/harvester/issues/1585 Verify Items An Timeout Option can be configured for support bundle Error message will display when reach timeout Case: Generate support bundle but hit timeout Install Harvester with at least 2 nodes Navigate to Advanced Settings, modify support-bundle-timeout to 2 Navigate to Support, Click Generate Support Bundle, and force shut down one of the node in the mean time. 2 mins later, the function will failed with an Error message pop up as the snapshot + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1585">https://github.com/harvester/harvester/issues/1585</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>An <strong>Timeout</strong> Option can be configured for support bundle</li> <li>Error message will display when reach timeout</li> </ul> <h2 id="case-generate-support-bundle-but-hit-timeout">Case: Generate support bundle but hit timeout</h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Navigate to Advanced Settings, modify <code>support-bundle-timeout</code> to <code>2</code></li> <li>Navigate to Support, Click <strong>Generate Support Bundle</strong>, and force shut down one of the node in the mean time.</li> <li><strong>2</strong> mins later, the function will failed with an Error message pop up as the snapshot <img src="https://user-images.githubusercontent.com/5169694/145191630-27ef156c-d8dd-4480-811c-c1ce39142491.png" alt="image"></li> </ol> Topology aware scheduling of guest cluster workloads https://harvester.github.io/tests/manual/_incoming/1418-2383-topology-scheduling-of-guest-cluster-workloads/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1418-2383-topology-scheduling-of-guest-cluster-workloads/ - Related issues: #1418 [FEATURE] Support topology aware scheduling of guest cluster workloads Related issues: #2383 [backport v1.0.3] [FEATURE] Support topology aware scheduling of guest cluster workloads Category: Rancher integration Verification Steps Environment preparation as above steps Access Harvester node config page Add the following node labels with values topology.kubernetes.io/zone topology.kubernetes.io/region Provision an RKE2 cluster Wait for the provisioning complete Access RKE2 guest cluster Access the 
RKE2 cluster in Cluster Management page Click + to add another node Access the RKE2 cluster node page Wait until the second node created Edit yaml of the second node Check the harvester node label have propagated to the guest cluster node Expected Results The topology encoded in the Harvester cluster node labels Can be correctly propagated to the additional node of the RKE2 guest cluster + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1418">#1418</a> [FEATURE] Support topology aware scheduling of guest cluster workloads</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2383">#2383</a> [backport v1.0.3] [FEATURE] Support topology aware scheduling of guest cluster workloads</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Environment preparation as above steps</li> <li>Access Harvester node config page</li> <li>Add the following node labels with values <ul> <li>topology.kubernetes.io/zone</li> <li>topology.kubernetes.io/region</li> </ul> </li> <li>Provision an RKE2 cluster</li> <li>Wait for the provisioning complete</li> <li>Access RKE2 guest cluster</li> <li>Access the RKE2 cluster in Cluster Management page</li> <li>Click + to add another node <img src="https://user-images.githubusercontent.com/29251855/177774100-63c1a229-19d4-45f7-bd4e-8d2453c9149f.png" alt="image"></li> <li>Access the RKE2 cluster node page <img src="https://user-images.githubusercontent.com/29251855/177774234-ed001086-75a2-46e7-9638-0771cc790fad.png" alt="image"></li> <li>Wait until the second node created <img src="https://user-images.githubusercontent.com/29251855/177774368-0c8b6ac1-15f0-4a64-8945-85551dc85e4f.png" alt="image"></li> <li>Edit yaml of the second node</li> <li>Check the harvester node label have propagated to the guest cluster node <img src="https://user-images.githubusercontent.com/29251855/177774559-8f278b2d-fff0-48ec-a62f-ceb3a9da8cc3.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li> <p>The topology encoded in the Harvester cluster node labels <img src="https://user-images.githubusercontent.com/29251855/177771658-1e3a8336-61c7-459d-9d4f-19e626ce9f23.png" alt="image"></p> Try to add a network with no name (e2e_be) https://harvester.github.io/tests/manual/network/negative-add-network-no-name/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/negative-add-network-no-name/ - Navigate to the networks page in harvester Click Create Don&rsquo;t add a name Add a VLAN ID Click Create Expected Results You should get an error that says you need to add a name + <ol> <li>Navigate to the networks page in harvester</li> <li>Click Create</li> <li>Don&rsquo;t add a name</li> <li>Add a VLAN ID</li> <li>Click Create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error that says you need to add a name</li> </ol> Turn off host that is in maintenance mode (e2e_be) https://harvester.github.io/tests/manual/hosts/maintenance-mode-turn-off-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-turn-off-host/ - Put host in maintenance mode Migrate VMs Wait for VMs to migrate Wait for any vms to migrate off Shut down Host Expected Results Host should start to go into maintenance mode Any VMs should migrate off Host should go into maintenance mode host should shut down Maintenance mode label in hosts list should go red 
Known bugs https://github.com/harvester/harvester/issues/1272 + <ol> <li>Put host in maintenance mode</li> <li>Migrate VMs</li> <li>Wait for VMs to migrate</li> <li>Wait for any vms to migrate off</li> <li>Shut down Host</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host should start to go into maintenance mode</li> <li>Any VMs should migrate off</li> <li>Host should go into maintenance mode</li> <li>host should shut down</li> <li>Maintenance mode label in hosts list should go red</li> </ol> <h3 id="known-bugs">Known bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/1272">https://github.com/harvester/harvester/issues/1272</a></p> UI enables option to display password on login page https://harvester.github.io/tests/manual/authentication/ui_password_show_btn/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/ui_password_show_btn/ - Ref: https://github.com/harvester/harvester/issues/1550 Verify Items Password field in login page can be toggle show/hide Case: Toggle of Password field install harvester with any nodes setup password logout then login with password toggled + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1550">https://github.com/harvester/harvester/issues/1550</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Password field in login page can be toggle show/hide</li> </ul> <h2 id="case-toggle-of-password-field">Case: Toggle of Password field</h2> <ol> <li>install harvester with any nodes</li> <li>setup password</li> <li>logout then login with password toggled</li> </ol> Unable to stop VM which in starting state https://harvester.github.io/tests/manual/_incoming/2263_unable_to_stop_vm_which_in_starting_state/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2263_unable_to_stop_vm_which_in_starting_state/ - Ref: https://github.com/harvester/harvester/issues/2263 Verify Steps: Install Harvester with any nodes Create an Windows iso image for VM creation Create the Windows VM by using the iso image When the VM in Starting state, Stop button should able to click and work as expected + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2263">https://github.com/harvester/harvester/issues/2263</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create an Windows iso image for VM creation</li> <li>Create the Windows VM by using the iso image</li> <li>When the VM in <strong>Starting</strong> state, <strong>Stop</strong> button should able to click and work as expected</li> </ol> Update image labels after deleting source VM https://harvester.github.io/tests/manual/virtual-machines/1602-update-labels-on-image-after-vm-delete/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/1602-update-labels-on-image-after-vm-delete/ - Related issues: #1602 exported image can&rsquo;t be deleted after vm removed Verification Steps create vm &ldquo;vm-1&rdquo; create a image &ldquo;img-1&rdquo; by export the volume used by vm &ldquo;vm-1&rdquo; delete vm &ldquo;vm-1&rdquo; update image &ldquo;img-1&rdquo; labels Expected Results image &ldquo;img-1&rdquo; will be updated + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1602">#1602</a> exported image can&rsquo;t be deleted after vm removed</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>create vm &ldquo;vm-1&rdquo;</li> <li>create a image &ldquo;img-1&rdquo; by export the 
volume used by vm &ldquo;vm-1&rdquo;</li> <li>delete vm &ldquo;vm-1&rdquo;</li> <li>update image &ldquo;img-1&rdquo; labels</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>image &ldquo;img-1&rdquo; will be updated</li> </ol> Update image labels after deleting source VM(e2e_fe) https://harvester.github.io/tests/manual/images/1602-update-labels-on-image-after-vm-delete/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/1602-update-labels-on-image-after-vm-delete/ - Related issues: #1602 exported image can&rsquo;t be deleted after vm removed Verification Steps create vm &ldquo;vm-1&rdquo; create a image &ldquo;img-1&rdquo; by export the volume used by vm &ldquo;vm-1&rdquo; delete vm &ldquo;vm-1&rdquo; update image &ldquo;img-1&rdquo; labels Expected Results image &ldquo;img-1&rdquo; will be updated + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1602">#1602</a> exported image can&rsquo;t be deleted after vm removed</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>create vm &ldquo;vm-1&rdquo;</li> <li>create a image &ldquo;img-1&rdquo; by export the volume used by vm &ldquo;vm-1&rdquo;</li> <li>delete vm &ldquo;vm-1&rdquo;</li> <li>update image &ldquo;img-1&rdquo; labels</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>image &ldquo;img-1&rdquo; will be updated</li> </ol> Upgrade guest cluster kubernetes version can also update the cloud provider chart version https://harvester.github.io/tests/manual/_incoming/2546-upgrade-guest-k8s-version-upgrade-cloud-provider/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2546-upgrade-guest-k8s-version-upgrade-cloud-provider/ - Related issues: #2546 [BUG] Harvester Cloud Provider is not able to deploy upgraded container after upgrading the cluster Category: Rancher integration Verification Steps Prepare the previous stable Rancher rc version and Harvester Update rke-metadata-config to {&quot;refresh-interval-minutes&quot;:&quot;1440&quot;,&quot;url&quot;:&quot;https://yufa-dev.s3.ap-east-1.amazonaws.com/data.json&quot;} in global settings Update the ui-dashboard-index to https://releases.rancher.com/dashboard/latest/index.html Set ui-offline-preferred to Remote Refresh web page (ctrl + r) Open Create RKE2 cluster page Check the show deprecated kubernetes patched versions Select v1.23.8+rke2r1 Finish the RKE2 cluster provision Check the current cloud provider version in workload page Edit RKE2 cluster, upgrade the kubernetes version to 1. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2546">#2546</a> [BUG] Harvester Cloud Provider is not able to deploy upgraded container after upgrading the cluster</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare the previous stable Rancher rc version and Harvester</li> <li>Update <code>rke-metadata-config</code> to <code>{&quot;refresh-interval-minutes&quot;:&quot;1440&quot;,&quot;url&quot;:&quot;https://yufa-dev.s3.ap-east-1.amazonaws.com/data.json&quot;}</code> in global settings <img src="https://user-images.githubusercontent.com/29251855/180735267-939e92e3-7fd5-4659-8bc8-ab14c95161d8.png" alt="image"></li> <li>Update the ui-dashboard-index to <code>https://releases.rancher.com/dashboard/latest/index.html</code></li> <li>Set <code>ui-offline-preferred</code> to <code>Remote</code></li> <li>Refresh web page (ctrl + r)</li> <li>Open Create RKE2 cluster page</li> <li>Check the <code>show deprecated kubernetes patched versions</code> <img src="https://user-images.githubusercontent.com/29251855/180736528-feaa9615-ccf9-482b-9354-c2c9a6a4b23b.png" alt="image"></li> <li>Select <code>v1.23.8+rke2r1</code></li> <li>Finish the RKE2 cluster provision <img src="https://user-images.githubusercontent.com/29251855/180738516-3f429bba-22ab-4476-bebf-0ac2f87935c3.png" alt="image"></li> <li>Check the current cloud provider version in workload page <img src="https://user-images.githubusercontent.com/29251855/180738877-56afcd55-e519-48d9-a8b8-3cbed91a1dfb.png" alt="image"></li> <li>Edit RKE2 cluster, upgrade the kubernetes version to <code>1.23.9-rc3+rke2r1</code> <img src="https://user-images.githubusercontent.com/29251855/180739231-e61ef680-5a9d-480b-9ac9-eda7839e17b6.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/180739331-611b05d4-0c5d-4835-9da0-8c05b9cca027.png" alt="image"></li> <li>Wait for update finish <img src="https://user-images.githubusercontent.com/29251855/180739876-dc409fa8-a9a6-406b-a614-085cea57121f.png" alt="image"></li> <li>The cloud provider is upgrading <img src="https://user-images.githubusercontent.com/29251855/180740637-5d1c6ce0-07ed-4a62-a364-f1b5e9fe473f.png" alt="image"></li> <li>delete the old cloud provider version pod (v0.1.3) <img src="https://user-images.githubusercontent.com/29251855/180740767-e6d5cdc2-c004-4c7a-8298-690775265002.png" alt="image"></li> <li>Wait for newer version cloud provider have been bumped <img src="https://user-images.githubusercontent.com/29251855/180740875-38fa0cc0-c13a-4e39-ba46-5e869eadf087.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/180740998-80e451e5-ad91-4111-8abe-f51395427b9c.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <p>After upgrading the existing RKE2 guest cluster kubernetes version from older <code>v1.23.8+rke2r1</code> to <code>1.23.9-rc3+rke2r1</code>. 
The Harvester cloud provider can successfully updated from <code>v0.1.3</code> to <code>v0.1.4</code></p> Upgrade Harvester from new cluster network design (after v1.1.0) https://harvester.github.io/tests/manual/upgrade/upgrade-from-new-network-design/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/upgrade/upgrade-from-new-network-design/ - Category: Upgrade Harvester Environment requirement Network environment has available VLAN id setup on DHCP server DHCP server has setup the IP range can allocate to above VLAN id Harvester node can route to DHCP server through VLAN id to retrieve IP address Network has at least two NICs Suggest not to use SMR type HDD disk We can select VM or Bare machine network setup according to available resource Virtual Machine environment setup Clone ipxe-example https://github. + <h2 id="category">Category:</h2> <ul> <li>Upgrade Harvester</li> </ul> <h2 id="environment-requirement">Environment requirement</h2> <ol> <li>Network environment has available VLAN id setup on DHCP server</li> <li>DHCP server has setup the IP range can allocate to above VLAN id</li> <li>Harvester node can route to DHCP server through VLAN id to retrieve IP address</li> <li>Network has at least two NICs</li> <li>Suggest not to use SMR type HDD disk</li> </ol> <p>We can select VM or Bare machine network setup according to available resource</p> Upgrade Harvester from traditonal cluster network design (before v1.1.0) https://harvester.github.io/tests/manual/upgrade/upgrade-from-traditional-network-design/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/upgrade/upgrade-from-traditional-network-design/ - Category: Upgrade Harvester Environment requirement Network environment has available VLAN id setup on DHCP server DHCP server has setup the IP range can allocate to above VLAN id Harvester node can route to DHCP server through VLAN id to retrieve IP address Network has at least two NICs Network has at least two NICs Suggest not to use SMR type HDD disk We can select VM or Bare machine network setup according to available resource + <h2 id="category">Category:</h2> <ul> <li>Upgrade Harvester</li> </ul> <h2 id="environment-requirement">Environment requirement</h2> <ol> <li>Network environment has available VLAN id setup on DHCP server</li> <li>DHCP server has setup the IP range can allocate to above VLAN id</li> <li>Harvester node can route to DHCP server through VLAN id to retrieve IP address</li> <li>Network has at least two NICs</li> <li>Network has at least two NICs</li> <li>Suggest not to use SMR type HDD disk</li> </ol> <p>We can select VM or Bare machine network setup according to available resource</p> Upgrade Harvester in Fully Airgapped Environment https://harvester.github.io/tests/manual/upgrade/fully-airgapped-upgrade/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/upgrade/fully-airgapped-upgrade/ - Category: Upgrade Harvester Environment requirement Airgapped Network without internet connectivity Network environment has available VLAN id setup on DHCP server DHCP server has setup the IP range can allocate to above VLAN id Harvester node can route to DHCP server through VLAN id to retrieve IP address Network has at least two NICs Suggest not to use SMR type HDD disk We can select VM or Bare machine network setup according to your available resource Virtual Machine environment setup Clone ipxe-example https://github. 
+ <h2 id="category">Category:</h2> <ul> <li>Upgrade Harvester</li> </ul> <h2 id="environment-requirement">Environment requirement</h2> <ol> <li>Airgapped Network without internet connectivity</li> <li>Network environment has available VLAN id setup on DHCP server</li> <li>DHCP server has setup the IP range can allocate to above VLAN id</li> <li>Harvester node can route to DHCP server through VLAN id to retrieve IP address</li> <li>Network has at least two NICs</li> <li>Suggest not to use SMR type HDD disk</li> </ol> <h4 id="we-can-select-vm-or-bare-machine-network-setup-according-to-your-available-resource">We can select VM or Bare machine network setup according to your available resource</h4> <h2 id="virtual-machine-environment-setup">Virtual Machine environment setup</h2> <ol> <li>Clone ipxe-example <a href="https://github.com/harvester/ipxe-examples">https://github.com/harvester/ipxe-examples</a></li> <li>Switch to v1.0 or main branch</li> <li>Edit Vagrantfile, add a new network interface of <code>pxe_server.vm.network</code></li> <li>Set the <code>pxe_server.vm.network</code> bond to correct <code>libvirt</code> network</li> <li>Add two additional new network interface of <code>harvester_node.vm.network</code></li> <li>Edit the settings.yml, set <code>harvester_network_config.offline: true</code></li> <li>Use ipxe-example to provision a multi nodes Harvester cluster</li> <li>Run <code>varant ssh pxe_server</code> to access pxe server</li> <li>Edit the <code>dhcpd.conf</code>, let pxe server can create a vlan and assign IP to it</li> </ol> <h2 id="bare-machine-environment-setup">Bare machine environment setup</h2> <ol> <li>Ensure your switch router have setup VLAN network</li> <li>Setup the VLAN connectivity to your Router/Gateway device</li> <li>Disable internet connectivity on Router</li> <li>Provision a multi nodes Harvester cluster</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>For <code>VLAN 1</code> testing, enable network on settings, select <code>harvester-mgmt</code></p> Upgrade Harvester on node that has bonded NICs for management interface https://harvester.github.io/tests/manual/_incoming/3045-upgrade-with-bonded-nic/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/3045-upgrade-with-bonded-nic/ - Related issues: #3045 [BUG] Harvester Upgrade 1.0.3 to 1.1.0 does not handle multiple SLAVE in BOND for management interface Category: Upgrade Environment Setup This is to be done on a Harvester cluster where the NICs were configured to be bonded on install for the management interface. This can be done in one of two ways. Single node virtualized environment Bare metal environment with at least two NICs (this should really be done on 10gig NICs, but can be done on gigabit) Both NICs should be on the same VLAN/network with the same subnet + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/3045">#3045</a> [BUG] Harvester Upgrade 1.0.3 to 1.1.0 does not handle multiple SLAVE in BOND for management interface</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade</li> </ul> <h2 id="environment-setup">Environment Setup</h2> <ul> <li> <p>This is to be done on a Harvester cluster where the NICs were configured to be bonded on install for the management interface. 
This can be done in one of two ways.</p> <ul> <li>Single node virtualized environment</li> <li>Bare metal environment with at least two NICs (this should really be done on 10gig NICs, but can be done on gigabit)</li> </ul> </li> <li> <p>Both NICs should be on the same VLAN/network with the same subnet</p> Upgrade Harvester with bonded NICs on network https://harvester.github.io/tests/manual/upgrade/bonded-nics-traditional-network-upgrade/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/upgrade/bonded-nics-traditional-network-upgrade/ - Related issues: #3047 [BUG] migrate_harv_mgmt_to_mgmt_br.sh should remove ClusterNetwork resource Category: Upgrade Harvester Environment setup from v1.0.3 upgrade to v1.1.1 Clone ipxe-example and switch to v1.0 branch Add three additional Network interface in Vagrantfile harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39;, mac: @settings[&#39;harvester_network_config&#39;][&#39;cluster&#39;][node_number][&#39;mac&#39;] harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; Edit the config-create.yaml.j2 and config-join.yaml.j2 in /ansible/role/harvester/template/ Add the cluster_network and defaultPysicalNIC to harvester-mgmt cluster_networks: vlan: enable: true description: &#34;some description about this cluster network&#34; config: defaultPhysicalNIC: harvester-mgmt Bond multiple NICs on harvester-mgmt and harvester-vlan networks networks: harvester-mgmt: interfaces: - name: {{ settings[&#39;harvester_network_config&#39;][&#39;cluster&#39;][0][&#39;mgmt_interface&#39;] }} # The management interface name - name: ens9 method: dhcp bond0: interfaces: - name: {{ settings[&#39;harvester_network_config&#39;][&#39;cluster&#39;][0][&#39;vagrant_interface&#39;] }} method: dhcp harvester-vlan: interfaces: - name: ens7 - name: ens8 method: none Verification Steps Provision previous version of Harvester cluster + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/3047">#3047</a> [BUG] migrate_harv_mgmt_to_mgmt_br.sh should remove ClusterNetwork resource</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade Harvester</li> </ul> <h2 id="environment-setup-from-v103-upgrade-to-v111">Environment setup from v1.0.3 upgrade to v1.1.1</h2> <ol> <li>Clone ipxe-example and switch to <code>v1.0</code> branch</li> <li>Add three additional Network interface in <code>Vagrantfile</code> <pre tabindex="0"><code> harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39;, mac: @settings[&#39;harvester_network_config&#39;][&#39;cluster&#39;][node_number][&#39;mac&#39;] harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; </code></pr Upgrade Harvester with HDD Disks https://harvester.github.io/tests/manual/upgrade/upgrade-with-hdd-disk/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/upgrade/upgrade-with-hdd-disk/ - Category: Upgrade Harvester Environment requirement Network environment has available VLAN id setup on DHCP server DHCP server has setup the IP 
range can allocate to above VLAN id Harvester node can route to DHCP server through VLAN id to retrieve IP address Network has at least two NICs Use HDD disk with SMR type or slow I/O speed n1-103:~ # smartctl -a /dev/sda smartctl 7.2 2021-09-14 r5237... === START OF INFORMATION SECTION === Model Family: Seagate BarraCuda 3. + <h2 id="category">Category:</h2> <ul> <li>Upgrade Harvester</li> </ul> <h2 id="environment-requirement">Environment requirement</h2> <ol> <li>Network environment has available VLAN id setup on DHCP server</li> <li>DHCP server has setup the IP range can allocate to above VLAN id</li> <li>Harvester node can route to DHCP server through VLAN id to retrieve IP address</li> <li>Network has at least two NICs</li> <li>Use HDD disk with SMR type or slow I/O speed <pre tabindex="0"><code>n1-103:~ # smartctl -a /dev/sda smartctl 7.2 2021-09-14 r5237... === START OF INFORMATION SECTION === Model Family: Seagate BarraCuda 3.5 (SMR) </code></pr Upgrade Harvester with IPv6 DHCP https://harvester.github.io/tests/manual/upgrade/ipv6-dhcp-upgrade/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/upgrade/ipv6-dhcp-upgrade/ - Related issues: #2962 [BUG] Host IP is inconsistent Category: Upgrade Harvester Environment setup Open the virtual machine manager Open the Connection Details -&gt; Virtual Networks Create a new virtual network workload Add the following XML content &lt;network&gt; &lt;name&gt;workload&lt;/name&gt; &lt;uuid&gt;ac62e6bf-6869-41a9-a2b7-25c06c7601c9&lt;/uuid&gt; &lt;forward mode=&#34;nat&#34;&gt; &lt;nat&gt; &lt;port start=&#34;1024&#34; end=&#34;65535&#34;/&gt; &lt;/nat&gt; &lt;/forward&gt; &lt;bridge name=&#34;virbr5&#34; stp=&#34;on&#34; delay=&#34;0&#34;/&gt; &lt;mac address=&#34;52:54:00:7b:ed:99&#34;/&gt; &lt;domain name=&#34;workload&#34;/&gt; &lt;ip address=&#34;192.168.101.1&#34; netmask=&#34;255.255.255.0&#34;&gt; &lt;dhcp&gt; &lt;range start=&#34;192.168.101.128&#34; end=&#34;192.168.101.254&#34;/&gt; &lt;/dhcp&gt; &lt;/ip&gt; &lt;ip family=&#34;ipv6&#34; address=&#34;fd7d:844d:3e17:f3ae::1&#34; prefix=&#34;64&#34;&gt; &lt;dhcp&gt; &lt;range start=&#34;fd7d:844d:3e17:f3ae::100&#34; end=&#34;fd7d:844d:3e17:f3ae::1ff&#34;/&gt; &lt;/dhcp&gt; &lt;/ip&gt; &lt;/network&gt; Change the bridge name to a new one + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2962">#2962</a> [BUG] Host IP is inconsistent</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade Harvester</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ol> <li> <p>Open the virtual machine manager</p> </li> <li> <p>Open the Connection Details -&gt; Virtual Networks</p> </li> <li> <p>Create a new virtual network <code>workload</code></p> </li> <li> <p>Add the following XML content</p> <pre tabindex="0"><code>&lt;network&gt; &lt;name&gt;workload&lt;/name&gt; &lt;uuid&gt;ac62e6bf-6869-41a9-a2b7-25c06c7601c9&lt;/uuid&gt; &lt;forward mode=&#34;nat&#34;&gt; &lt;nat&gt; &lt;port start=&#34;1024&#34; end=&#34;65535&#34;/&gt; &lt;/nat&gt; &lt;/forward&gt; &lt;bridge name=&#34;virbr5&#34; stp=&#34;on&#34; delay=&#34;0&#34;/&gt; &lt;mac address=&#34;52:54:00:7b:ed:99&#34;/&gt; &lt;domain name=&#34;workload&#34;/&gt; &lt;ip address=&#34;192.168.101.1&#34; netmask=&#34;255.255.255.0&#34;&gt; &lt;dhcp&gt; &lt;range start=&#34;192.168.101.128&#34; end=&#34;192.168.101.254&#34;/&gt; &lt;/dhcp&gt; &lt;/ip&gt; &lt;ip family=&#34;ipv6&#34; address=&#34;fd7d:844d:3e17:f3ae::1&#34; prefix=&#34;64&#34;&gt; &lt;dhcp&gt; &lt;range 
start=&#34;fd7d:844d:3e17:f3ae::100&#34; end=&#34;fd7d:844d:3e17:f3ae::1ff&#34;/&gt; &lt;/dhcp&gt; &lt;/ip&gt; &lt;/network&gt; </code></pr Upgrade support of audit and event log https://harvester.github.io/tests/manual/_incoming/2750-support-audit-event-log/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2750-support-audit-event-log/ - Related issues: #2750 [FEATURE] Upgrade support of audit and event log Category: Logging Audit Verification Steps Prepare v1.0.3 cluster, single-node and multi-node need to be tested separately Upgrade to v1.1.0-rc2 / master-head The upgrade should be successful, if not, check log and POD errors After upgrade, check following PODs and files, there should be no error Expected Results Check both Single and Multi nodes upgrade of the following: Check the following files and pods have no error + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2750">#2750</a> [FEATURE] Upgrade support of audit and event log</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Logging Audit</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare v1.0.3 cluster, single-node and multi-node need to be tested separately</li> <li>Upgrade to v1.1.0-rc2 / master-head</li> <li>The upgrade should be successful, if not, check log and POD errors</li> <li>After upgrade, check following PODs and files, there should be no error</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Check both Single and Multi nodes upgrade of the following:</p> Upload Cloud Image (e2e_be) https://harvester.github.io/tests/manual/images/upload-cloud-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/upload-cloud-image/ - Upload image to images page Create new vm with image using appropriate template Run VM health checks Expected Results Image should upload Health checks should pass + <ol> <li>Upload image to images page</li> <li>Create new vm with image using appropriate template</li> <li>Run VM health checks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Image should upload</li> <li>Health checks should pass</li> </ol> Upload image that is invalid https://harvester.github.io/tests/manual/images/negative-upload-invalid-image-file/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/negative-upload-invalid-image-file/ - steTry to upload invalid image file to images page Something like dmg, or tar.gzps Expected Results You should get an error Known Bugs https://github.com/harvester/harvester/issues/1425 + <ol> <li>steTry to upload invalid image file to images page <ul> <li>Something like dmg, or tar.gzps</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error</li> </ol> <h3 id="known-bugs">Known Bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/1425">https://github.com/harvester/harvester/issues/1425</a></p> Upload ISO Image(e2e_fe) https://harvester.github.io/tests/manual/images/upload-iso-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/upload-iso-image/ - Upload image to images page Create new vm with image using appropriate template Run VM health checks Expected Results Image should upload Health checks should pass + <ol> <li>Upload image to images page</li> <li>Create new vm with image using appropriate template</li> <li>Run VM health checks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Image should 
upload</li> <li>Health checks should pass</li> </ol> Use a non-admin user https://harvester.github.io/tests/manual/node-driver/non-admin-user/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/non-admin-user/ - create harvester user ltang, password ltang add a harvester node template Refer to the &ldquo;Test Data&rdquo; value setting. Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active The status of the corresponding vm on harvester active The information displayed on rancher and harvester matches the template configuration Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. + <ol> <li>create harvester user ltang, password ltang</li> <li>add a harvester node template</li> <li>Refer to the &ldquo;Test Data&rdquo; value setting.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>The status of the corresponding vm on harvester active</li> <li>The information displayed on rancher and harvester matches the template configuration</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Validate network connectivity external VLAN (e2e_be) https://harvester.github.io/tests/manual/network/validate-network-external-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/validate-network-external-vlan/ - Create a new VM Make sure that the network is set to the external VLAN with bridge as the type Ping VM Attempt to SSH to VM Expected Results VM should be created You should be able to ping the VM from an external network You should be able to SSH to VM + <ol> <li>Create a new VM</li> <li>Make sure that the network is set to the external VLAN with bridge as the type</li> <li>Ping VM</li> <li>Attempt to SSH to VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should be created</li> <li>You should be able to ping the VM from an external network</li> <li>You should be able to SSH to VM</li> </ol> Validate network connectivity invalid external VLAN (e2e_be) https://harvester.github.io/tests/manual/network/negative-network-connectivity-invalid-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/negative-network-connectivity-invalid-vlan/ - Create a new VM Make sure that the network is set to the external VLAN with bridge as the type and a VLAN ID that isn&rsquo;t valid for your network Ping VM Attempt to SSH to VM Expected Results VM should be created You should not be able to ping the VM from an external network You should not be able to SSH to VM + <ol> <li>Create a new VM</li> <li>Make sure that the network is set to the external VLAN with bridge as the type and a VLAN ID that isn&rsquo;t valid for your network</li> <li>Ping VM</li> <li>Attempt to SSH to VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should be 
created</li> <li>You should not be able to ping the VM from an external network</li> <li>You should not be able to SSH to VM</li> </ol> Validate network connectivity management network (e2e_be) https://harvester.github.io/tests/manual/network/validate-network-management-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/validate-network-management-network/ - Create a new VM Make sure that the network is set to the management network with masquerade as the type Ping VM Attempt to SSH to VM Expected Results VM should be created You should not be able to ping the VM from an external network You should not be able to SSH to VM + <ol> <li>Create a new VM</li> <li>Make sure that the network is set to the management network with masquerade as the type</li> <li>Ping VM</li> <li>Attempt to SSH to VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should be created</li> <li>You should not be able to ping the VM from an external network</li> <li>You should not be able to SSH to VM</li> </ol> Validate QEMU agent installation https://harvester.github.io/tests/manual/virtual-machines/1235-check-qemu-installation/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/1235-check-qemu-installation/ - Related issues: #1235 QEMU agent is not installed by default when creating VMs Verification Steps Creat openSUSE VM Start VM check for qemu-ga package Create Ubuntu VM Start VM Check for qemu-ga package Expected Results VMs should start Packages should be present + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1235">#1235</a> QEMU agent is not installed by default when creating VMs</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Creat openSUSE VM</li> <li>Start VM</li> <li>check for qemu-ga package</li> <li>Create Ubuntu VM</li> <li>Start VM</li> <li>Check for qemu-ga package</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VMs should start</li> <li>Packages should be present</li> </ol> Validate volume shows as in use when attached (e2e_be) https://harvester.github.io/tests/manual/volumes/validate-volume-shows-in-use-while-attached/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/validate-volume-shows-in-use-while-attached/ - Navigate to Volumes and check for a volume in use by a VM Verify that the state says In Use Expected Results State should show correctly + <ol> <li>Navigate to Volumes and check for a volume in use by a VM</li> <li>Verify that the state says In Use</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>State should show correctly</li> </ol> Verify "Add Node Pool" https://harvester.github.io/tests/manual/node-driver/verify-add-node-pool/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/verify-add-node-pool/ - Create a cluster of 3 nodes, One node with etcd, Control Plane, Worker, the other two with Worker The cluster is created successfully, use the command kubectl get node to view the node roles Expected Results The status of the created cluster shows active show the 3 created node status running in harvester&rsquo;s vm list the information displayed on rancher and harvester matches the template configuration Check that the node role is correct + <ol> <li>Create a cluster of 3 nodes, One node with etcd, Control Plane, Worker, the other two with Worker</li> <li>The cluster is created successfully, use the command <code>kubectl get 
node</code> to view the node roles</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>show the 3 created node status running in harvester&rsquo;s vm list</li> <li>the information displayed on rancher and harvester matches the template configuration</li> <li>Check that the node role is correct</li> </ol> Verify and Configure Networking Connection (e2e_be) https://harvester.github.io/tests/manual/deployment/verify-network-connection/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/verify-network-connection/ - Provide the hostName Select management NIC bond Select the IPv4 (Automatic and Static) Expected Results This value of hostname should be overwritten by DHCP if DHCP supplies a hostname for the system. If DHCP doesn&rsquo;t offer a hostname and this value is empty, a random hostname will be generated. + <ol> <li>Provide the hostName</li> <li>Select management NIC bond</li> <li>Select the IPv4 (Automatic and Static)</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>This value of hostname should be overwritten by DHCP if DHCP supplies a hostname for the system. If DHCP doesn&rsquo;t offer a hostname and this value is empty, a random hostname will be generated.</p> Verify Configuring SSH keys https://harvester.github.io/tests/manual/deployment/verify-ssh/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/verify-ssh/ - Provide SSH keys while installing the Harvester. Verify user is able to login the node using that ssh key. Expected Results User should be able to login to the node using that ssh key. + <ol> <li>Provide SSH keys while installing the Harvester.</li> <li>Verify user is able to login the node using that ssh key.</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>User should be able to login to the node using that ssh key.</p> Verify Configuring via HTTP URL https://harvester.github.io/tests/manual/deployment/verify-http-config/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/verify-http-config/ - Provide the remote Harvester config, you can find an example of the config I&rsquo;m using in the deployment test plan description Expected Results Check that all values are taking into account If you are using my config file, check: the node must be off after the installation the nvme and kvm modules are loaded the file /etc/test.txt exists with the correct rights the systcl values the env variable test_env should exist dns configured in /etc/resolv. 
+ <ol> <li>Provide the remote Harvester config, you can find an example of the config I&rsquo;m using in the deployment test plan description</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Check that all values are taking into account <ul> <li>If you are using my config file, check:</li> <li>the node must be off after the installation</li> <li>the nvme and kvm modules are loaded</li> <li>the file /etc/test.txt exists with the correct rights</li> <li>the systcl values</li> <li>the env variable test_env should exist</li> <li>dns configured in /etc/resolv.conf </li> <li>ntp configured in /etc/systemd/timesyncd.conf</li> </ul> </li> <li>Check the config file here: /oem/harvester.config</li> </ol> Verify Enabling maintenance mode https://harvester.github.io/tests/manual/hosts/verify-enabling-maintenance-mode/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/verify-enabling-maintenance-mode/ - Navigate to the Hosts page and select the node Click Maintenance Mode Expected Results The existing VM should get migrated to other nodes. Verify the CRDs to see the maintenance mode is enabled. Comments Needs other test cases to be added If VM migration fails How does live migration work What happens if there are no schedulable resources on nodes Check CRDs on hosts On going into maintenance mode kubectl get virtualmachines &ndash;all-namespaces Kubectl get virtualmachines/name -o yaml On coming out of maintenance mode kubectl get virtualmachines &ndash;all-namespaces Kubectl get virtualmachines/name -o yaml Check that maintenance mode host isn&rsquo;t schedulable Fully provision all nodes and try to create a VM It should fail Migration with maintenance mode What if migration gets stuck, can you cancel VMs going to different hosts Canceling maintenance mode P1 Put in maintenance mode Check migration of VMs Check status of VMs modify filesystem on VMs Check status of host Take host out of maintenance mode Check status of host Migrate VMs back to host Check filesystem Create new VMs on host Check status of VMs + <ol> <li>Navigate to the Hosts page and select the node</li> <li>Click Maintenance Mode</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The existing VM should get migrated to other nodes.</li> <li>Verify the CRDs to see the maintenance mode is enabled.</li> </ol> <h3 id="comments">Comments</h3> <ol> <li>Needs other test cases to be added</li> <li>If VM migration fails</li> <li>How does live migration work</li> <li>What happens if there are no schedulable resources on nodes <ul> <li>Check CRDs on hosts <ul> <li>On going into maintenance mode</li> <li>kubectl get virtualmachines &ndash;all-namespaces</li> </ul> </li> <li>Kubectl get virtualmachines/name -o yaml <ul> <li>On coming out of maintenance mode</li> <li>kubectl get virtualmachines &ndash;all-namespaces</li> </ul> </li> </ul> </li> <li>Kubectl get virtualmachines/name -o yaml <ul> <li>Check that maintenance mode host isn&rsquo;t schedulable <ul> <li>Fully provision all nodes and try to create a VM</li> </ul> </li> </ul> </li> <li>It should fail <ul> <li>Migration with maintenance mode</li> <li>What if migration gets stuck, can you cancel</li> <li>VMs going to different hosts</li> <li>Canceling maintenance mode</li> <li>P1 <ul> <li>Put in maintenance mode</li> <li>Check migration of VMs</li> <li>Check status of VMs</li> <li>modify filesystem on VMs</li> <li>Check status of host</li> <li>Take host out of maintenance mode</li> <li>Check status of host</li> <li>Migrate VMs back 
to host</li> <li>Check filesystem</li> <li>Create new VMs on host</li> <li>Check status of VMs</li> </ul> </li> </ul> </li> </ol> Verify network data template https://harvester.github.io/tests/manual/misc/1634-terms-and-conditions-link/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/misc/1634-terms-and-conditions-link/ - Related issues: #1634 Welcome screen asks to agree to T&amp;Cs for using Rancher not Harvester Verification Steps Install Harvester Go to management page and see last line (before Continue button) Verify link to SUSE EULA https://www.suse.com/licensing/eula/ Expected Results Link should go to SUSE EULA + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1634">#1634</a> Welcome screen asks to agree to T&amp;Cs for using Rancher not Harvester</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester</li> <li>Go to management page and see last line (before Continue button)</li> <li>Verify link to SUSE EULA <a href="https://www.suse.com/licensing/eula/">https://www.suse.com/licensing/eula/</a></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Link should go to SUSE EULA <img src="https://user-images.githubusercontent.com/83787952/145657167-2d8ebd33-14d6-4c78-a30f-37075b206219.png" alt="image"></li> </ol> Verify network data template https://harvester.github.io/tests/manual/templates/1655-network-data-template/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/templates/1655-network-data-template/ - Related issues: #1655 When using a VM Template the Network Data in the template is not displayed Verification Steps Create new VM template with network data in advanced settings network: version: 1 config: - type: physical name: interface0 subnets: - type: static address: 10.84.99.0/24 gateway: 10.84.99.254 Create new VM and select template Verify that network data is in advanced network config Expected Results network data should show + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1655">#1655</a> When using a VM Template the Network Data in the template is not displayed</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create new VM template with network data in advanced settings</li> </ol> <pre tabindex="0"><code>network: version: 1 config: - type: physical name: interface0 subnets: - type: static address: 10.84.99.0/24 gateway: 10.84.99.254 </code></pr Verify operations like Stop, restart, pause, download YAML, generate template (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/verify-operations-like-stop-restart-pause-download-yaml-generate-template/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/verify-operations-like-stop-restart-pause-download-yaml-generate-template/ - Take an existing VM and Press the appropriate buttons for the associated operations Stop Restart Pause Download YAML Generate Template Expected Results All operations should complete successfully + <ol> <li>Take an existing VM and Press the appropriate buttons for the associated operations <ul> <li>Stop</li> <li>Restart</li> <li>Pause</li> <li>Download YAML</li> <li>Generate Template</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>All operations should complete successfully</li> </ol> Verify SSH key was added from Github during install https://harvester.github.io/tests/manual/authentication/verify-github-ssh/ Mon, 01 Jan 0001 00:00:00 
+0000 https://harvester.github.io/tests/manual/authentication/verify-github-ssh/ - Add ssh key from Github while installing the Harvester. Login Harvester with github. Expected Results User should be able to logout/login successfully. + <ol> <li>Add ssh key from Github while installing the Harvester.</li> <li>Login Harvester with github.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>User should be able to logout/login successfully.</li> </ol> Verify that vm-force-reset-policy works https://harvester.github.io/tests/manual/advanced/1661-vm-force-reset-policy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/1661-vm-force-reset-policy/ - Related issues: #1661 vm-force-deletion-policy for vm-force-reset-policy Environment setup Setup an airgapped harvester Create a 3 node harvester cluster Verification Steps Navigate to advanced settings and edit vm-force-reset-policy Set reset policy to 60 Create VM Run health checks Shut down node that is running VM Check for when it starts to migrate to new Host Expected Results It should migrate after 60 seconds + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1661">#1661</a> vm-force-deletion-policy for vm-force-reset-policy</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup an airgapped harvester</p> <ol> <li>Create a 3 node harvester cluster</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Navigate to advanced settings and edit vm-force-reset-policy <img src="https://user-images.githubusercontent.com/83787952/146448317-a259d86d-2020-4bed-adc2-f19ecf0d3fbb.png" alt="image"></li> <li>Set reset policy to <code>60</code></li> <li>Create VM</li> <li>Run health checks</li> <li>Shut down node that is running VM</li> <li>Check for when it starts to migrate to new Host</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>It should migrate after 60 seconds</li> </ol> Verify that vm-force-reset-policy works https://harvester.github.io/tests/manual/virtual-machines/1660-volume-unit-vm-details/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/1660-volume-unit-vm-details/ - Related issues: #1660 The volume unit on the vm details page is incorrect Verification Steps Create new .1G volume Create new VM Create with raw-image template Add opensuse base image Add .1G Volume Verify size in VM details on volume tab Expected Results Size should show as .1G + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1660">#1660</a> The volume unit on the vm details page is incorrect</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create new .1G volume</li> <li>Create new VM</li> <li>Create with raw-image template</li> <li>Add opensuse base image</li> <li>Add .1G Volume</li> <li>Verify size in VM details on volume tab <img src="https://user-images.githubusercontent.com/83787952/145658516-73f5c72c-2543-46cd-9f90-8bc47f5ce2d4.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Size should show as .1G</li> </ol> Verify that vm-force-reset-policy works https://harvester.github.io/tests/manual/volumes/1660-volume-unit-vm-details/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/1660-volume-unit-vm-details/ - Related issues: #1660 The volume unit on the vm details page is incorrect Verification Steps Create new .1G volume Create new VM Create with raw-image template Add opensuse base image 
Add .1G Volume Verify size in VM details on volume tab Expected Results Size should show as .1G + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1660">#1660</a> The volume unit on the vm details page is incorrect</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create new .1G volume</li> <li>Create new VM</li> <li>Create with raw-image template</li> <li>Add opensuse base image</li> <li>Add .1G Volume</li> <li>Verify size in VM details on volume tab <img src="https://user-images.githubusercontent.com/83787952/145658516-73f5c72c-2543-46cd-9f90-8bc47f5ce2d4.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Size should show as .1G</li> </ol> Verify that VMs stay up when disks are evicted https://harvester.github.io/tests/manual/volumes/1334-evict-disks-check-vms/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/1334-evict-disks-check-vms/ - Related issues #1334 Volumes fail with Scheduling Failure after evicting disc on multi-disc node #5307 Replicas should be evicted and rescheduled to other disks before removing extra disk Verification Steps Created 3 nodes Harvester. Added formatted disk (called disk A) to node0 VM in the harvester node page. Added disk tag test on following disk in the longhorn page. disk A of node0 root disk of node1 root disk of node2 Created storage class with disk tag test and replica 3. + <ul> <li>Related issues <ul> <li><a href="https://github.com/harvester/harvester/issues/1334">#1334</a> Volumes fail with Scheduling Failure after evicting disc on multi-disc node</li> <li><a href="https://github.com/harvester/harvester/issues/5307">#5307</a> Replicas should be evicted and rescheduled to other disks before removing extra disk</li> </ul> </li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Created 3 nodes Harvester.</p> </li> <li> <p>Added formatted disk (called disk A) to node0 VM in the harvester node page.</p> </li> <li> <p>Added disk tag <code>test</code> on following disk in the longhorn page.</p> Verify the external link at the bottom of the page https://harvester.github.io/tests/manual/ui/verify-bottom-links/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/ui/verify-bottom-links/ - Click all the external links available at the bottom of the page - Docs, Forums, Slack, File an issue. Click the Generate support bundle at the bottom of the page Expected Results The external links should take user to correct URL in new tab in the browser. The support bundle should be generated once the Generate support bundle. The progress should be shown while the bundle is getting generated. The Generated bundle should have all components logs and Yaml + <ol> <li>Click all the external links available at the bottom of the page - Docs, Forums, Slack, File an issue.</li> <li>Click the Generate support bundle at the bottom of the page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The external links should take user to correct URL in new tab in the browser.</li> <li>The support bundle should be generated once the Generate support bundle. 
The progress should be shown while the bundle is getting generated.</li> <li>The Generated bundle should have all components logs and Yaml</li> </ol> Verify the Filter on the Host page https://harvester.github.io/tests/manual/hosts/verify-filter-on-host-page/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/verify-filter-on-host-page/ - Enter name of a host and verify the nodes get filtered out. Expected Results The edited name should be reflected on the host. + <ol> <li>Enter name of a host and verify the nodes get filtered out.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The edited name should be reflected on the host.</li> </ol> Verify the Harvester UI URL (e2e_fe) https://harvester.github.io/tests/manual/ui/verify-url/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/ui/verify-url/ - Navigate to the Harvester UI and verify the URL. Verify the Harvester icon on the left top corner Expected Results The URL should be the management ip + /dashboard redirect to login page if not login redirect to dashboard page if already login + <ol> <li>Navigate to the Harvester UI and verify the URL.</li> <li>Verify the Harvester icon on the left top corner</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The URL should be the management ip + /dashboard</li> <li>redirect to login page if not login</li> <li>redirect to dashboard page if already login</li> </ol> Verify the info of the node https://harvester.github.io/tests/manual/hosts/verify-node-info/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/verify-node-info/ - Navigate to the hosts tab and verify the following. State Name Host IP CPU Memory Storage Size Age Expected Results All the data/status shown on the page should be correct. + <ol> <li>Navigate to the hosts tab and verify the following. <ul> <li>State</li> <li>Name</li> <li>Host IP</li> <li>CPU</li> <li>Memory</li> <li>Storage Size</li> <li>Age</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>All the data/status shown on the page should be correct.</li> </ol> Verify the installation confirmation screen https://harvester.github.io/tests/manual/deployment/verify-installation-confirmation/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/verify-installation-confirmation/ - Verify all the details shown on the screen Expected Results The info should reflect all the user filled data. + <ol> <li>Verify all the details shown on the screen</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>The info should reflect all the user filled data.</p> Verify the Installer Options https://harvester.github.io/tests/manual/deployment/verify-installer-options/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/verify-installer-options/ - Verify the following options available while installing the Harvester is working Installation target Node IP Cluster token Password DNS Server VIP HTTP Proxy NTP Address Expected Results Should show all the disks available. Verify the min and max length acceptable for cluster token. 
Verify the password rule + <ol> <li>Verify the following options available while installing the Harvester is working <ul> <li>Installation target</li> <li>Node IP</li> <li>Cluster token</li> <li>Password</li> <li>DNS Server</li> <li>VIP</li> <li>HTTP Proxy</li> <li>NTP Address</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Should show all the disks available.</li> <li>Verify the min and max length acceptable for cluster token.</li> <li>Verify the password rule</li> </ul> Verify the left side menu (e2e_fe) https://harvester.github.io/tests/manual/ui/verify-left-menu/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/ui/verify-left-menu/ - Check all the menu at the left side of the screen. Verify the preference and logout option is available at the right top of the screen Expected Results The menu should have Dashboard, Hosts, Virtual machines, Volumes, Images and Advance. The Advance menu should have sub menu Templates, backups, network, SSH keys, Users, Cloud config templates, Settings. Clicking on the menu should take user to the respective pages + <ol> <li>Check all the menu at the left side of the screen.</li> <li>Verify the preference and logout option is available at the right top of the screen</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The menu should have Dashboard, Hosts, Virtual machines, Volumes, Images and Advance.</li> <li>The Advance menu should have sub menu Templates, backups, network, SSH keys, Users, Cloud config templates, Settings.</li> <li>Clicking on the menu should take user to the respective pages</li> </ol> Verify the links which navigate to the internal pages https://harvester.github.io/tests/manual/ui/verify-internal-links/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/ui/verify-internal-links/ - Click the links available on the pages like on dashboard - host, virtual machines etc Verify the events and resources tabs presents in the pages e.g. - Dashboard, Virtual machines Expected Results The internal link should take user to the correct page in the same tab opened in the browser + <ol> <li>Click the links available on the pages like on dashboard - host, virtual machines etc</li> <li>Verify the events and resources tabs presents in the pages e.g. - Dashboard, Virtual machines</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The internal link should take user to the correct page in the same tab opened in the browser</li> </ol> Verify the options available for image https://harvester.github.io/tests/manual/images/verify-options-available-for-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/verify-options-available-for-image/ - Create vm with YAML using the menu option. Download Yaml Verify the downloaded Yaml file. 
Clone the Image Expected Results All user-specified fields must match what show on GUI: Namespace Name Description URL Labels + <ol> <li>Create vm with YAML using the menu option.</li> <li>Download Yaml</li> <li>Verify the downloaded Yaml file.</li> <li>Clone the Image</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>All user-specified fields must match what show on GUI: <ul> <li>Namespace</li> <li>Name</li> <li>Description</li> <li>URL</li> <li>Labels</li> </ul> </li> </ol> Verify the Proxy configuration https://harvester.github.io/tests/manual/deployment/verify-proxy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/verify-proxy/ - Provide a valid proxy address, verify it works after installation is complete. Provide empty proxy address. Expected Results For empty proxy address, by default DHCP should provide the management url and it should navigate to the Harvester UI. + <ul> <li>Provide a valid proxy address, verify it works after installation is complete.</li> <li>Provide empty proxy address.</li> </ul> <h2 id="expected-results">Expected Results</h2> <p>For empty proxy address, by default DHCP should provide the management url and it should navigate to the Harvester UI.</p> Verify the state for Powered down node https://harvester.github.io/tests/manual/hosts/negative-verify-state-powered-down-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/negative-verify-state-powered-down-node/ - Power down the node and check the state of the node in the cluster Expected Results The node state should show unavilable + <ol> <li>Power down the node and check the state of the node in the cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The node state should show unavilable</li> </ol> vGPU/SR-IOV GPU https://harvester.github.io/tests/manual/advanced/addons/2764-vgpu/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/addons/2764-vgpu/ - Related issues: #1661 vGPU Support Pre-requisite Enable PCI devices Create a harvester cluster in bare metal mode. Ensure one of the nodes has NIC separate from the management NIC Go to the management interface of the new cluster Go to Advanced -&gt; PCI Devices Validate that the PCI devices aren&rsquo;t enabled Click the link to enable PCI devices Enable PCI devices in the linked addon page Wait for the status to change to Deploy Successful Navigate to the PCI devices page Validate that the PCI devices page is populated/populating with PCI devices Pre-requisite Enable vGPU This can only be ran on a bare metal Harvester cluster that has an Nvidia card that support vGPU. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2764">#1661</a> vGPU Support</li> </ul> <h1 id="pre-requisite-enable-pci-devices">Pre-requisite Enable PCI devices</h1> <ol> <li>Create a harvester cluster in bare metal mode. 
Ensure one of the nodes has NIC separate from the management NIC</li> <li>Go to the management interface of the new cluster</li> <li>Go to Advanced -&gt; PCI Devices</li> <li>Validate that the PCI devices aren&rsquo;t enabled</li> <li>Click the link to enable PCI devices</li> <li>Enable PCI devices in the linked addon page</li> <li>Wait for the status to change to <code>Deploy Successful</code></li> <li>Navigate to the PCI devices page</li> <li>Validate that the PCI devices page is populated/populating with PCI devices</li> </ol> <h1 id="pre-requisite-enable-vgpu">Pre-requisite Enable vGPU</h1> <p>This can only be ran on a bare metal Harvester cluster that has an Nvidia card that support vGPU. You will also need the Nvidia KVM driver and the Nvidia grid installer. These can be downloaded from Nvidia through your partner portal as outlined <a href="https://www.nvidia.com/en-us/drivers/vgpu-software-driver/">here</a></p> View log function on virtual machine https://harvester.github.io/tests/manual/virtual-machines/5266-view-log-function/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/5266-view-log-function/ - Related issues: #5266 [BUG] Click View Logs option on virtual machine dashboard can&rsquo;t display any log entry Category: Virtual Machines Verification Steps Create one virtual machines named vm1 in the Harvester virtual machine page Wait until the vm1 in running state Click the View Logs in the side option menu Check the log panel content of vm Click the Clear button Click the Download button Enter some query sting in the Filter field Click settings, change the Show the latest to different options Uncheck/Check the Wrap Lines Uncheck/Check the Show Timestamps Expected Results Should display the detailed log entries on the vm log panel including timestamp and content All existing logs would be cleaned up Ensure new logs will display on the panel Check can correctly download the log to the . 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/5266">#5266</a> [BUG] Click View Logs option on virtual machine dashboard can&rsquo;t display any log entry</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Virtual Machines</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create one virtual machines named <code>vm1</code> in the Harvester virtual machine page</li> <li>Wait until the <code>vm1</code> in running state</li> <li>Click the View Logs in the side option menu</li> <li>Check the log panel content of vm</li> <li>Click the <code>Clear</code> button</li> <li>Click the <code>Download</code> button</li> <li>Enter some query sting in the <code>Filter</code> field</li> <li>Click settings, change the <code>Show the latest</code> to different options</li> <li>Uncheck/Check the <code>Wrap Lines</code></li> <li>Uncheck/Check the <code>Show Timestamps</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Should display the detailed log entries on the vm log panel including timestamp and content <img src="https://harvester.github.io/tests/images/virtual-machines/5266-view-vm-log.png" alt="images/virtual-machines/5266-view-vm-log.png"></li> <li>All existing logs would be cleaned up</li> <li>Ensure new logs will display on the panel</li> <li>Check can correctly download the log to the <code>.log</code> file and contain all the details</li> <li>Check the log entries contains the filter string can display correctly</li> <li>Check each different options of <code>Show the latest</code> log option can display log according to the settings</li> <li>Check the log entries can be wrapped or unwrapped</li> <li>Check the log entries can display with/without timestamp</li> </ul> VIP configured in a VLAN network should be reached https://harvester.github.io/tests/manual/network/vip-configured-on-vlan-network-should-be-reached/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/vip-configured-on-vlan-network-should-be-reached/ - Related issue: #1424 VIP configured in a VLAN network can not be reached Category: Network Environment Setup The network environment must have vlan network configured and also have DHCP server prepared on your testing vlan Verification Steps Enable virtual network with harvester-mgmt Open Network -&gt; Create a virtual network Provide network name and correct vlan id Open Route, use the default auto setting Create a VM and use the created route SSH to harvester node Ping the IP of the created VM Create a virutal network Provide network name and correct vlan id Open Route, use the manual setting Provide the CIDR and Gateway value Repeat step 5 - 7 Expected Results Check the auto route vlan can be detected with running status Check the manual route vlan can be detected with running status Check the VM can get IP based on auto or manual vlan route Check can ping VM IP from harvester node + <ul> <li>Related issue: <a href="https://github.com/harvester/harvester/issues/1424">#1424</a> VIP configured in a VLAN network can not be reached</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="environment-setup">Environment Setup</h2> <ul> <li>The network environment must have vlan network configured and also have DHCP server prepared on your testing vlan</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Enable virtual network with <code>harvester-mgmt</code></li> <li>Open Network -&gt; Create a virtual network</li> <li>Provide network name 
and correct vlan id <img src="https://user-images.githubusercontent.com/29251855/148182659-5b0f0d14-2654-4123-a417-4bd4e101b597.png" alt="image"></li> <li>Open Route, use the default <code>auto</code> setting <img src="https://user-images.githubusercontent.com/29251855/148182727-a445667c-fc78-4c83-a3d5-0238b8d2b17c.png" alt="image"></li> <li>Create a VM and use the created route</li> <li>SSH to harvester node</li> <li>Ping the IP of the created VM</li> <li>Create a virutal network</li> <li>Provide network name and correct vlan id</li> <li>Open Route, use the <code>manual</code> setting</li> <li>Provide the <code>CIDR</code> and <code>Gateway</code> value <img src="https://user-images.githubusercontent.com/29251855/148185885-b2c5b075-bd08-4fd6-97ad-7485a67e9339.png" alt="image"></li> <li>Repeat step 5 - 7</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Check the <code>auto</code> route vlan can be detected with <code>running</code> status <img src="https://user-images.githubusercontent.com/29251855/148183159-1242ad24-ee44-4428-8592-abdfa5d863fc.png" alt="image"></li> <li>Check the <code>manual</code> route vlan can be detected with <code>running</code> status</li> <li>Check the VM can get IP based on <code>auto</code> or <code>manual</code> vlan route</li> <li>Check can ping VM IP from harvester node</li> </ol> VIP is accessibility with VLAN enabled on management port https://harvester.github.io/tests/manual/network/vip_vlan_mgmtport/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/vip_vlan_mgmtport/ - Ref: https://github.com/harvester/harvester/issues/1722 Verify Items VIP should be accessible when VLAN enabled on management port Case: Single Node enables VLAN on management port Install Harvester with single node Login to dashboard then navigate to Settings Edit vlan to enable VLAN on harvester-mgmt reboot the node after reboot, login to console Run the command should not contain any output sudo -s kubectl get pods -A --template '{{range .items}}{{.metadata.name}}{{&quot;\n&quot;}}{{end}}' | grep harvester-network-controller-manager | xargs kubectl logs -n harvester-system | grep &quot;Failed to update lock&quot; Repeat step 4-6 with 10 times, should not have any error + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1722">https://github.com/harvester/harvester/issues/1722</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>VIP should be accessible when VLAN enabled on management port</li> </ul> <h2 id="case-single-node-enables-vlan-on-management-port">Case: Single Node enables VLAN on management port</h2> <ol> <li>Install Harvester with single node</li> <li>Login to dashboard then navigate to Settings</li> <li>Edit <strong>vlan</strong> to enable VLAN on <code>harvester-mgmt</code></li> <li>reboot the node</li> <li>after reboot, login to console</li> <li>Run the command should not contain any output <ul> <li><code>sudo -s</code></li> <li><code>kubectl get pods -A --template '{{range .items}}{{.metadata.name}}{{&quot;\n&quot;}}{{end}}' | grep harvester-network-controller-manager | xargs kubectl logs -n harvester-system | grep &quot;Failed to update lock&quot;</code></li> </ul> </li> <li>Repeat step 4-6 with 10 times, should not have any error</li> </ol> VIP Load balancer verification (e2e_be) https://harvester.github.io/tests/manual/deployment/verify-vip-load-balancer/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/verify-vip-load-balancer/ - Case DHCP Install Harvester on one 
Node Install with VIP pulling from DHCP Verify that IP is assigned via DHCP Add at least one additional node Use VIP address as management address for adding node Finish install of additional nodes Create new VM Connect to VM via web console Case Static IP Install Harvester on one Node Install with VIP set statically Verify that IP is assigned correctly Add at least one additional node Use VIP address as management address for adding node Finish install of additional nodes Create new VM Connect to VM via web console Expected Results Install of all nodes should complete New nodes should show up in hosts list via web UI at VIP VMs should create Console should open + <h2 id="case-dhcp">Case DHCP</h2> <ol> <li>Install Harvester on one Node <ul> <li>Install with VIP pulling from DHCP</li> <li>Verify that IP is assigned via DHCP </li> </ul> </li> <li>Add at least one additional node <ul> <li>Use VIP address as management address for adding node</li> </ul> </li> <li>Finish install of additional nodes</li> <li>Create new VM</li> <li>Connect to VM via web console</li> </ol> <h2 id="case-static-ip">Case Static IP</h2> <ol> <li>Install Harvester on one Node <ul> <li>Install with VIP set statically</li> <li>Verify that IP is assigned correctly</li> </ul> </li> <li>Add at least one additional node <ul> <li>Use VIP address as management address for adding node</li> </ul> </li> <li>Finish install of additional nodes</li> <li>Create new VM</li> <li>Connect to VM via web console</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Install of all nodes should complete</li> <li>New nodes should show up in hosts list via web UI at VIP</li> <li>VMs should create</li> <li>Console should open</li> </ol> virtualmachineimages.harvesterhci.io https://harvester.github.io/tests/manual/webhooks/virtualmachineimages.harvesterhci.io/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/webhooks/virtualmachineimages.harvesterhci.io/ - GUI Create an image from GUI Create another image with the same name. The operation should fail with admission webhook &ldquo;validator.harvesterhci.io&rdquo; denied the request: A resource with the same name exists kube-api Create an image from the manifest: $ cat image.yaml --- apiVersion: harvesterhci.io/v1beta1 kind: VirtualMachineImage metadata: generateName: image- namespace: default spec: sourceType: download displayName: cirros-0.5.1-x86_64-disk2.img url: http://192.168.2.106/cirros-0.5.1-x86_64-disk.img $ kubectl create -f image.yml virtualmachineimage.harvesterhci.io/image-8dkbq created Try to create an image with the same manifest: $ kubectl create -f image. + <h3 id="gui">GUI</h3> <ol> <li>Create an image from GUI</li> <li>Create another image with the same name. 
The operation should fail with admission webhook &ldquo;<a href="http://validator.harvesterhci.io/">validator.harvesterhci.io</a>&rdquo; denied the request: A resource with the same name exists</li> </ol> <h3 id="kube-api">kube-api</h3> <ol> <li>Create an image from the manifest:</li> </ol> <pre tabindex="0"><code>$ cat image.yaml --- apiVersion: harvesterhci.io/v1beta1 kind: VirtualMachineImage metadata: generateName: image- namespace: default spec: sourceType: download displayName: cirros-0.5.1-x86_64-disk2.img url: http://192.168.2.106/cirros-0.5.1-x86_64-disk.img $ kubectl create -f image.yml virtualmachineimage.harvesterhci.io/image-8dkbq created </code></pr virtualmachinerestores.harvesterhci.io https://harvester.github.io/tests/manual/webhooks/virtualmachinerestores.harvesterhci.io/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/webhooks/virtualmachinerestores.harvesterhci.io/ - GUI Setup a backup target Create a backup from a VM. Assume the VM name is vm-test Wait until backup is done Restore the backup to a VM, enter vm-test in the Virtual Machine Name field kube-api $ cat restore.yaml 1 --- apiVersion: harvesterhci.io/v1beta1 kind: VirtualMachineRestore metadata: name: restore-aaaa namespace: default spec: newVM: false target: apiGroup: kubevirt.io kind: VirtualMachine name: &#34;&#34; virtualMachineBackupName: test $ kubectl create -f restore.yaml Expected Results GUI The operation should fail with admission webhook &ldquo;validator. + <h3 id="gui">GUI</h3> <ol> <li>Setup a backup target</li> <li>Create a backup from a VM. Assume the VM name is vm-test</li> <li>Wait until backup is done</li> <li>Restore the backup to a VM, enter vm-test in the Virtual Machine Name field</li> </ol> <h3 id="kube-api">kube-api</h3> <pre tabindex="0"><code>$ cat restore.yaml 1 --- apiVersion: harvesterhci.io/v1beta1 kind: VirtualMachineRestore metadata: name: restore-aaaa namespace: default spec: newVM: false target: apiGroup: kubevirt.io kind: VirtualMachine name: &#34;&#34; virtualMachineBackupName: test $ kubectl create -f restore.yaml </code></pr virtualmachinetemplateversions.harvesterhci.io https://harvester.github.io/tests/manual/webhooks/virtualmachinetemplateversions.harvesterhci.io/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/webhooks/virtualmachinetemplateversions.harvesterhci.io/ - kube-api List default templates: $ kubectl get virtualmachinetemplateversions.harvesterhci.io -n harvester-public GUI Go to Advanced -&gt; Templates page Create a new template and set it as the default version Try to delete the default version Expected Results kube-api Default templates should exist: NAME TEMPLATE_ID VERSION AGE iso-image-base-version 1 39m raw-image-base-version 1 39m windows-iso-image-base-version 1 39m GUI Creating a new template should succeed Deleting the default version of a template should fail with: admission webhook &ldquo;validator. 
+ <h3 id="kube-api">kube-api</h3> <ol> <li>List default templates: <code>$ kubectl get virtualmachinetemplateversions.harvesterhci.io -n harvester-public</code></li> </ol> <h3 id="gui">GUI</h3> <ol> <li>Go to Advanced -&gt; Templates page</li> <li>Create a new template and set it as the default version</li> <li>Try to delete the default version</li> </ol> <h2 id="expected-results">Expected Results</h2> <h3 id="kube-api-1">kube-api</h3> <ol> <li>Default templates should exist:</li> </ol> <pre tabindex="0"><code>NAME TEMPLATE_ID VERSION AGE iso-image-base-version 1 39m raw-image-base-version 1 39m windows-iso-image-base-version 1 39m </code></pr VLAN Upgrade Test https://harvester.github.io/tests/manual/_incoming/2734-vlan-upgrade-test/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2734-vlan-upgrade-test/ - Related issues: #2734 [FEATURE] VLAN enhancement upgrading Category: Upgrade Verification Steps Test plan 1: harvester-mgmt vlan1 Prepare a 3 nodes v1.0.3 Harvester cluster Enable network on harvester-mgmt Create vlan id 1 Create two VMs, one set to vlan 1 and another use harvester-mgmt Perform manual upgrade to v1.1.0 Test plan 2: enps0 NIC with valid vlan Prepare a 3 nodes v1.0.3 Harvester cluster Enable network on another NIC (eg. enp129s0) Create vlan id 91 on enp129s0 Create two VMs, one set to vlan 91 and another use harvester-mgmt Perform manual upgrade to v1. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2734">#2734</a> [FEATURE] VLAN enhancement upgrading</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <h3 id="test-plan-1-harvester-mgmt-vlan1">Test plan 1: harvester-mgmt vlan1</h3> <ol> <li>Prepare a 3 nodes <code>v1.0.3</code> Harvester cluster</li> <li>Enable network on <code>harvester-mgmt</code></li> <li>Create vlan id <code>1</code></li> <li>Create two VMs, one set to vlan 1 and another use harvester-mgmt</li> <li>Perform manual upgrade to <code>v1.1.0</code></li> </ol> <h3 id="test-plan-2--enps0-nic-with-valid-vlan">Test plan 2: enps0 NIC with valid vlan</h3> <ol> <li>Prepare a 3 nodes <code>v1.0.3</code> Harvester cluster</li> <li>Enable network on another NIC (eg. 
<code>enp129s0</code>)</li> <li>Create vlan id <code>91</code> on <code>enp129s0</code></li> <li>Create two VMs, one set to vlan 91 and another use harvester-mgmt</li> <li>Perform manual upgrade to <code>v1.1.0</code></li> </ol> <h3 id="test-plan-3-bond-mode-using-harvester-config-file">Test plan 3: Bond mode using Harvester config file</h3> <ol> <li>Edit the ipxe-example add two additional NICs in Vagrantfile <pre tabindex="0"><code>harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; </code></pr VM Backup with metadata https://harvester.github.io/tests/manual/backup-and-restore/vm_backup_metadata/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/vm_backup_metadata/ - Ref: https://github.com/harvester/harvester/issues/988 Verify Items Metadata should be removed along with VM deleted Metadata should be synced after backup target switched Metadata can be used in new cluster Case: Metadata create and delete Install Harvester with any nodes Create an image for VM creation Setup NFS/S3 backup target Create a VM, then create a backup named backup1 File default-backup1.cfg should be exist in the backup target path &lt;backup root&gt;/harvester/vmbackups Delete the VM Backup backup1 File default-backup1. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/988">https://github.com/harvester/harvester/issues/988</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Metadata should be removed along with VM deleted</li> <li>Metadata should be synced after backup target switched</li> <li>Metadata can be used in new cluster</li> </ul> <h2 id="case-metadata-create-and-delete">Case: Metadata create and delete</h2> <ol> <li>Install Harvester with any nodes</li> <li>Create an image for VM creation</li> <li>Setup NFS/S3 <strong>backup target</strong></li> <li>Create a VM, then create a backup named <code>backup1</code></li> <li>File <code>default-backup1.cfg</code> should be exist in the <strong>backup target</strong> path <code>&lt;backup root&gt;/harvester/vmbackups</code></li> <li>Delete the VM Backup <code>backup1</code></li> <li>File <code>default-backup1.cfg</code> should be removed</li> </ol> <h2 id="case-metadata-sync-after-backup-target-changed">Case: Metadata sync after backup target changed</h2> <ol> <li>Install Harvester with any nodes</li> <li>Create an image for VM creation</li> <li>Setup NFS <strong>backup target</strong></li> <li>Create VM <code>vm1</code>, then create file <code>tmp</code> with content <code>first</code> in the VM</li> <li>Backup <code>vm1</code> named <code>backup1</code></li> <li>Append content <code>second</code> into <code>tmp</code> file in the VM <code>vm1</code></li> <li>Backup <code>vm1</code> named <code>backup2</code></li> <li>Switch <strong>backup target</strong> to S3</li> <li>Delete backups and VM <code>vm1</code> in the dashboard</li> <li>Backup Files should be kept in the former <strong>backup target</strong></li> <li>Swithc <strong>backup target</strong> back</li> <li>Backups should be loaded in Dashboard&rsquo;s Backup page</li> <li>Restore <code>backup1</code> to <code>vm-b1</code></li> <li><code>vm-b1</code> should contain file which was created in <strong>Step 4</strong></li> <li>Restore <code>backup2</code> to <code>vm-b2</code></li> <li><code>vm-b2</code> should contain file which was modified in <strong>step 6</strong></li> <li>Repeat <strong>Step 3</strong> 
to <strong>Step 16</strong> with following Backup ordering</li> </ol> <ul> <li>S3 -&gt; NFS</li> <li>NFS -&gt; NFS</li> <li>S3 -&gt; S3</li> </ul> <h2 id="case-backup-rebuild-in-new-cluster">Case: Backup rebuild in new cluster</h2> <ol> <li>Repeat <strong>Case: Metadata create and delete</strong> as cluster A to generate backup data</li> <li>Installer another Harvester with any nodes as cluster B</li> <li>setup <strong>backup-target</strong> which contained old backup data</li> <li><strong>Backup Targets</strong> in <em>Backups</em> should show <code>Ready</code> state for all backups. (this will take few mins depends on network connection)</li> <li>Create image for backup <ol> <li>The image <strong>MUST</strong> use the same <code>storageClassName</code> name as the backup created.</li> <li><code>storageClassName</code> can be found in backup&rsquo;s <code>volumeBackups</code> in the YAML definition.</li> <li><code>storageClassName</code> can be assigned by <code>metadata.name</code> when creating image via YAML. For example, when you assign <code>metadata.name</code> as <code>image-dgf27</code>, the <code>storageClassName</code> will be named as <code>longhorn-image-dgf27</code></li> </ol> </li> <li>Restore backup to new VM</li> <li>VM should started successfully</li> <li>VM should contain those data that it was taken backup</li> </ol> VM boot stress test https://harvester.github.io/tests/manual/_incoming/2906-vm-boot-stress-test-/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2906-vm-boot-stress-test-/ - Related issues: #2906 [BUG] VM can’t boot due to filesystem corruption Category: Volume Verification Steps Create volume (Harvester, Longhorn storage class) Create volume from image Unmount volume from VM Delete volume in use and not in use Export volume to image Create VM from the exported image Edit volume to increase size Delete volume in use Clone volume Take volume snapshot Restore volume snapshot Utilize the E2E test in harvester/test repo to prepare a script to continues run step 1-11 at lease 100 runs Expected Results Pass more than 300 rounds of the I/O write test, Should Not encounter data corruption issue and VM is alive opensuse:~ # xfs_info /dev/vda3 meta-data=/dev/vda3 isize=512 agcount=13, agsize=653887 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=0, rmapbt=0 = reflink=0 data = bsize=4096 blocks=7858427, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1 log =internal log bsize=4096 blocks=2560, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2906">#2906</a> [BUG] VM can’t boot due to filesystem corruption</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Volume</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create volume (Harvester, Longhorn storage class)</li> <li>Create volume from image</li> <li>Unmount volume from VM</li> <li>Delete volume in use and not in use</li> <li>Export volume to image</li> <li>Create VM from the exported image</li> <li>Edit volume to increase size</li> <li>Delete volume in use</li> <li>Clone volume</li> <li>Take volume snapshot</li> <li>Restore volume snapshot</li> <li>Utilize the E2E test in harvester/test repo to prepare a script to continues run step 1-11 at lease 100 runs</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Pass more than 300 rounds of the I/O write test, 
<strong>Should Not</strong> encounter data corruption issue and VM is alive <pre tabindex="0"><code>opensuse:~ # xfs_info /dev/vda3 meta-data=/dev/vda3 isize=512 agcount=13, agsize=653887 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=0, rmapbt=0 = reflink=0 data = bsize=4096 blocks=7858427, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1 log =internal log bsize=4096 blocks=2560, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 </code></pr VM Import/Migration https://harvester.github.io/tests/manual/_incoming/2274-vm-import/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2274-vm-import/ - Related issues: #2274 [Feature] VM Import/Migration Category: Virtual Machine Test Information Test Environment: 1 node harvester on local kvm machine Harvester version: v1.1.0-rc1 Vsphere: 7.0 Openstack: Simulated using running devstack Download kubeconfig for harvester cluster Environment Setup Prepare Harvester master node Prepare vsphere setup (or use existing setup) Prepare a devstack cluster (Openstack 16.2) (stable/train) OpenStack Setup Prepare a baremetal or virtual machine to host the OpenStack service For automated installation on virtual machine, please refer to the cloud init user data in https://github. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2274">#2274</a> [Feature] VM Import/Migration</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Virtual Machine</li> </ul> <h2 id="test-information">Test Information</h2> <p>Test Environment:</p> <ul> <li>1 node harvester on local kvm machine</li> <li>Harvester version: v1.1.0-rc1</li> <li>Vsphere: 7.0</li> <li>Openstack: Simulated using running devstack</li> <li>Download kubeconfig for harvester cluster</li> </ul> <h2 id="environment-setup">Environment Setup</h2> <ol> <li>Prepare Harvester master node</li> <li>Prepare vsphere setup (or use existing setup)</li> <li>Prepare a devstack cluster (Openstack 16.2) (stable/train)</li> </ol> <h3 id="openstack-setup">OpenStack Setup</h3> <ol> <li>Prepare a baremetal or virtual machine to host the OpenStack service</li> <li>For automated installation on virtual machine, please refer to the <code>cloud init user data</code> in <a href="https://github.com/harvester/tests/issues/522#issuecomment-1654646620">https://github.com/harvester/tests/issues/522#issuecomment-1654646620</a></li> <li>For manual installation, we can also follow the command in the <code>cloud init user data</code></li> </ol> <h3 id="openstack-troubleshooting">OpenStack troubleshooting</h3> <p>If you failed create volume with the following error message <code>Error: Failed to perform requested operation on instance &quot;opensuse&quot;, the instance has an error status: Please try again later [Error: Build of instance 289d8c95-fd99-42a4-8eab-3a522e891463 aborted: Invalid input received: Invalid image identifier or unable to access requested image. 
(HTTP 400) (Request-ID: req-248baac7-a2de-4c51-9817-de653a548e3b)].</code></p> VM IP addresses should be labeled per network interface https://harvester.github.io/tests/manual/_incoming/2032-2370-vm-ip-lableled-per-network-interface/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2032-2370-vm-ip-lableled-per-network-interface/ - Related issues: #2032 [BUG] VM IP addresses should be labeled per network interface Related issues: #2370 [backport v1.0.3] VM IP addresses should be labeled per network interface Category: Virtual Machine Verification Steps Enable network with magement-mgmt interface Create vlan network vlan1 with id 1 Check the IP address on the VM page Create a VM with harvester-mgmt network Import Harvester in Rancher Provision a RKE2 cluster from Rancher Check the IP address on the VM page Expected Results Now the VM list only show IP which related to user access. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2032">#2032</a> [BUG] VM IP addresses should be labeled per network interface</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2370">#2370</a> [backport v1.0.3] VM IP addresses should be labeled per network interface</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Virtual Machine</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Enable network with management-mgmt interface</li> <li>Create vlan network <code>vlan1</code> with id <code>1</code></li> <li>Check the IP address on the VM page</li> <li>Create a VM with <code>harvester-mgmt</code> network</li> <li>Import Harvester in Rancher</li> <li>Provision a RKE2 cluster from Rancher</li> <li>Check the IP address on the VM page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Now the VM list only shows IPs which are related to user access.</li> <li>And provides a hover message on each displayed IP address <img src="https://user-images.githubusercontent.com/29251855/173749441-06fdad41-147a-4703-b19f-eafb1af9f18d.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/173750324-9f26bcd2-024c-428f-a8bd-2a564c6078f2.png" alt="image"></li> </ul> VM label names consistentency before and after the restore https://harvester.github.io/tests/manual/_incoming/2662-vm-label-names-consistentency-after-the-restore/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2662-vm-label-names-consistentency-after-the-restore/ - Related issues: #2662 [BUG] VM label names should be consistent before and after the restore task is done Category: Network Verification Steps Create a VM named ubuntu Check the label name in virtual machine yaml content, label marked with harvesterhci.io/vmName Setup the S3 backup target Take a S3 backup with name After the backup task is done, delete the current VM Restore VM from the backup with the same name ubuntu (Create New) Check the yaml content after VM fully operated Expected Results The vm lable name is consistent to display harvesterhci. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2662">#2662</a> [BUG] VM label names should be consistent before and after the restore task is done</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a VM named <code>ubuntu</code></li> <li>Check the label name in virtual machine yaml content, label marked with <code>harvesterhci.io/vmName</code> <img src="https://user-images.githubusercontent.com/29251855/188374691-b36db1bc-2e2e-447b-96e1-699aa5e0ffee.png" alt="image"></li> <li>Setup the S3 backup target</li> <li>Take an S3 backup with a name</li> <li>After the backup task is done, delete the current VM</li> <li>Restore VM from the backup with the same name <code>ubuntu</code> (Create New) <img src="https://user-images.githubusercontent.com/29251855/188378123-9af171af-c992-4e78-bdbb-8627903502ff.png" alt="image"></li> <li>Check the yaml content after the VM is fully operational</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>The VM label name is consistently displayed as <code>harvesterhci.io/vmName</code> after restoring from the backup.</p> VM on error state https://harvester.github.io/tests/manual/virtual-machines/vm_on_error_state/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/vm_on_error_state/ - Ref: https://github.com/harvester/harvester/issues/1446 https://github.com/harvester/harvester/issues/982 Verify Items Error message should displayed when VM can&rsquo;t be scheduled VM&rsquo;s state should be changed when host is down Case: Create a VM that no Node can host it Install Harvester with any nodes download a image to create VM create a VM with over-commit (consider to over-provisioning feature, double or triple the host resource would be more reliable.) VM should shows Starting state, and an alart icon shows aside. 
+ <p>Ref:</p> <ul> <li><a href="https://github.com/harvester/harvester/issues/1446">https://github.com/harvester/harvester/issues/1446</a></li> <li><a href="https://github.com/harvester/harvester/issues/982">https://github.com/harvester/harvester/issues/982</a></li> </ul> <h2 id="verify-items">Verify Items</h2> <ul> <li>Error message should be displayed when VM can&rsquo;t be scheduled</li> <li>VM&rsquo;s <strong>state</strong> should be changed when host is down</li> </ul> <h2 id="case-create-a-vm-that-no-node-can-host-it">Case: Create a VM that no Node can host it</h2> <ol> <li>Install Harvester with any nodes</li> <li>download an image to create a VM</li> <li>create a VM with over-commit (considering the over-provisioning feature, doubling or tripling the host resource would be more reliable.)</li> <li>VM should show <strong>Starting</strong> state, and an alert icon shows alongside.</li> <li>hover over the icon, a pop-up message should display messages like <code>0/N nodes are available: n insufficient ...</code></li> </ol> <h2 id="case-vms-state-changed-to-not-ready-when-the-host-is-down">Case: VM&rsquo;s state changed to <strong>Not Ready</strong> when the host is down</h2> <ol> <li>Install Harvester with 2+ nodes</li> <li>Create an Image for VM creation</li> <li>Create a VM and wait until state becomes <strong>Running</strong></li> <li>Reboot the node which is hosting the VM</li> <li>Node&rsquo;s <em>State</em> should be <code>In Progress</code> on the <em><strong>Hosts</strong></em> page</li> <li>VM&rsquo;s <em>State</em> should be <code>Not Ready</code> on the <em><strong>Virtual Machines</strong></em> page</li> </ol> VM scheduling on Specific node (e2e_fe) https://harvester.github.io/tests/manual/virtual-machines/vm_schedule_on_node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/vm_schedule_on_node/ - Ref: https://github.com/harvester/harvester/issues/1350 Verify Items Node which is not active should not be listed in Node Scheduling list Case: Schedule VM on the Node which is Enable Maintenance Mode Install Harvester with at least 2 nodes Login and Navigate to Virtual Machines Create VM and Select Run VM on specific node(s)... 
All Active nodes should in the list Navigate to Host and pick node(s) to Enable Maintenance Mode Make sure Node(s) state changed into Maintenance Mode Repeat step 2 and 3 Picked Node(s) should not in the list Revert picked Node(s) to back to state of Active Repeat step 2 to 4 + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1350">https://github.com/harvester/harvester/issues/1350</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Node which is not active should not be listed in <strong>Node Scheduling</strong> list</li> </ul> <h2 id="case-schedule-vm-on-the-node-which-is-enable-maintenance-mode">Case: Schedule VM on the Node which is <strong>Enable Maintenance Mode</strong></h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Login and Navigate to <em>Virtual Machines</em></li> <li>Create VM and Select <code>Run VM on specific node(s)...</code></li> <li>All <em><strong>Active</strong></em> nodes should be in the list</li> <li>Navigate to <em>Host</em> and pick node(s) to <strong>Enable Maintenance Mode</strong></li> <li>Make sure Node(s) state changed into <em><strong>Maintenance Mode</strong></em></li> <li>Repeat step 2 and 3</li> <li>Picked Node(s) should not be in the list</li> <li>Revert picked Node(s) back to the state of <em><strong>Active</strong></em></li> <li>Repeat step 2 to 4</li> </ol> VM Snapshot support https://harvester.github.io/tests/manual/_incoming/553_vm_snapshot_support/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/553_vm_snapshot_support/ - Ref: https://github.com/harvester/harvester/issues/553 Verify Steps: Install Harvester with any nodes Create an Image for VM creation Create vm1 with the image and an additional data volume disk-1 Login to vm1, execute following commands: fdisk /dev/vdb with new and primary partition mkfs.ext4 /dev/vdb1 mkdir vdb &amp;&amp; mount -t ext4 /dev/vdb1 vdb ping 127.0.0.1 | tee -a test vdb/test Navigate to Virtual Machines page, click Take Snapshot button on vm1&rsquo;s details, named vm1s1 Execute sync on vm1 and Take Snapshot named vm1s2 Interrupt ping. 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/553">https://github.com/harvester/harvester/issues/553</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create an Image for VM creation</li> <li>Create <code>vm1</code> with the image and an additional data volume <code>disk-1</code></li> <li>Login to <code>vm1</code>, execute following commands: <ul> <li><code>fdisk /dev/vdb</code> with new and primary partition</li> <li><code>mkfs.ext4 /dev/vdb1</code></li> <li><code>mkdir vdb &amp;&amp; mount -t ext4 /dev/vdb1 vdb</code></li> <li><code>ping 127.0.0.1 | tee -a test vdb/test</code></li> </ul> </li> <li>Navigate to <em>Virtual Machines</em> page, click <strong>Take Snapshot</strong> button on <code>vm1</code>&rsquo;s details, named <code>vm1s1</code></li> <li>Execute <code>sync</code> on <code>vm1</code> and <strong>Take Snapshot</strong> named <code>vm1s2</code></li> <li>Interrupt <code>ping...</code> command and <code>rm test &amp;&amp; sync</code>, then <strong>Take Snapshot</strong> named <code>vm1s3</code></li> <li>Restore 3 snapshots into <strong>New</strong> VM: <code>vm1s1r</code>, <code>vm1s2r</code> and <code>vm1s3r</code></li> <li>Content of <code>test</code> and <code>vdb/test</code> should be the same in VM, and different in other restored VMs.</li> <li>Restore snapshots with <strong>Replace Existing</strong></li> <li>Content of <code>test</code> and <code>vdb/test</code> in restored <code>vm1</code> from the snapshot, should be the same as the VM restored with the same snapshot.</li> </ol> VM template is not working with Node scheduling https://harvester.github.io/tests/manual/_incoming/2244_vm_template_is_not_working_with_node_scheduling/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2244_vm_template_is_not_working_with_node_scheduling/ - Ref: https://github.com/harvester/harvester/issues/2244 Verify Steps: Install Harvester with any nodes Create an Image for VM creation Create VM with Multiple Instance and Use VM Template, In Node Scheduling tab, select Run VM on specific node(s) Created VMs should be scheduled on the specific node + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2244">https://github.com/harvester/harvester/issues/2244</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/177742575-31730953-5ffd-4018-b5ce-1b1e487ee14c.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create an Image for VM creation</li> <li>Create VM with <em><strong>Multiple Instance</strong></em> and <em><strong>Use VM Template</strong></em>, In <strong>Node Scheduling</strong> tab, select <code>Run VM on specific node(s)</code></li> <li>Created VMs should be scheduled on the specific node</li> </ol> VM's CPU maximum limitation https://harvester.github.io/tests/manual/virtual-machines/vm_cpu_limits/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/vm_cpu_limits/ - Ref: https://github.com/harvester/harvester/issues/1565 Verify Items VM&rsquo;s maximum CPU amount should not have limitation. 
Case: Create VM with large CPU amount Install harvester with any nodes Create image for VM creation Create a VM with vCPU over than 100 Start VM and verify lscpu shows the same amount + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1565">https://github.com/harvester/harvester/issues/1565</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>VM&rsquo;s maximum CPU amount should not have limitation.</li> </ul> <h2 id="case-create-vm-with-large-cpu-amount">Case: Create VM with large CPU amount</h2> <ol> <li>Install harvester with any nodes</li> <li>Create image for VM creation</li> <li>Create a VM with more than 100 vCPUs</li> <li>Start VM and verify <code>lscpu</code> shows the same amount</li> </ol> VMIs created from VM Template don't have LiveMigrate evictionStrategy set https://harvester.github.io/tests/manual/_incoming/2357_vmis_created_from_vm_template_do_nott_have_livemigrate_evictionstrategy_set/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2357_vmis_created_from_vm_template_do_nott_have_livemigrate_evictionstrategy_set/ - Ref: https://github.com/harvester/harvester/issues/2357 Verify Steps: Install Harvester with at least 2 nodes Create Image for VM Creation Navigate to Advanced/Templates and create a template t1 Create VM vm1 from template t1 Edit YAML of vm1, field spec.template.spec.evictionStrategy should be LiveMigrate Enable Maintenance Mode on the host which hosting vm1 vm1 should start migrating automatically Migration should success + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2357">https://github.com/harvester/harvester/issues/2357</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Create Image for VM Creation</li> <li>Navigate to <em>Advanced/Templates</em> and create a template <code>t1</code></li> <li>Create VM <code>vm1</code> from template <code>t1</code></li> <li>Edit YAML of <code>vm1</code>, field <code>spec.template.spec.evictionStrategy</code> should be <code>LiveMigrate</code></li> <li>Enable Maintenance Mode on the host which is hosting <code>vm1</code></li> <li><code>vm1</code> should start migrating automatically</li> <li>Migration should succeed</li> </ol> VMs can't start if a node contains more than ~60 VMs https://harvester.github.io/tests/manual/_incoming/2722_vms_can_not_start_if_a_node_contains_more_than_60_vms/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2722_vms_can_not_start_if_a_node_contains_more_than_60_vms/ - Ref: https://github.com/harvester/harvester/issues/2722 Verify Steps: Install Harvester with any nodes Login to console, execute sysctl -a | grep aio, the value of fs.aio-max-nr should be 1048576 Update the value by executing: mkdir -p /usr/local/lib/sysctl.d/ cat &gt; /usr/local/lib/sysctl.d/harvester.conf &lt;&lt;EOF fs.aio-max-nr = 61440 EOF sysctl --system Execute sysctl -a | grep aio, the value of fs.aio-max-nr should be 61440 Reboot the node then execute sysctl -a | grep aio, the value of fs.aio-max-nr should still be 61440 Create an image for VM creation Create 60 VMs and schedule on the node which updated fs. 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/2722">https://github.com/harvester/harvester/issues/2722</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/192251104-7a53a1a9-260d-4e90-aade-1b3e7c11cc52.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to console, execute <code>sysctl -a | grep aio</code>, the value of <code>fs.aio-max-nr</code> should be <code>1048576</code></li> <li>Update the value by executing:</li> </ol> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>mkdir -p /usr/local/lib/sysctl.d/ </span></span><span style="display:flex;"><span>cat &gt; /usr/local/lib/sysctl.d/harvester.conf <span style="color:#e6db74">&lt;&lt;EOF </span></span></span><span style="display:flex;"><span><span style="color:#e6db74">fs.aio-max-nr = 61440 </span></span></span><span style="display:flex;"><span><span style="color:#e6db74">EOF</span> </span></span><span style="display:flex;"><span>sysctl --system </span></span></code></pre></div><ol> <li>Execute <code>sysctl -a | grep aio</code>, the value of <code>fs.aio-max-nr</code> should be <code>61440</code></li> <li>Reboot the node then execute <code>sysctl -a | grep aio</code>, the value of <code>fs.aio-max-nr</code> should still be <code>61440</code></li> <li>Create an image for VM creation</li> <li>Create 60 VMs and schedule on the node which updated <code>fs.aio-max-nr</code></li> <li>Update <code>fs.aio-max-nr</code> to <code>1048576</code> in <code>/usr/local/lib/sysctl.d/harvester.conf</code> and execute <code>sysctl --system</code></li> <li>VMs should started successfully or Stopping with error message <code>Too many pods</code></li> </ol> Volume size should be editable on derived template https://harvester.github.io/tests/manual/templates/derived_template_configure/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/templates/derived_template_configure/ - Ref: https://github.com/harvester/harvester/issues/1711 Verify Items Volume size can be changed when creating a derived template Case: Update volume size on new template derived from exist template Install Harvester with any Nodes Login to Dashboard Create Image for Template Creation Create Template T1 with Image Volume and additional Volume Modify Template T1 with update Volume size Volume size should be editable Click Save, then edit new version of T1 Volume size should be updated as expected + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1711">https://github.com/harvester/harvester/issues/1711</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Volume size can be changed when creating a derived template</li> </ul> <h2 id="case-update-volume-size-on-new-template-derived-from-exist-template">Case: Update volume size on new template derived from exist template</h2> <ol> <li>Install Harvester with any Nodes</li> <li>Login to Dashboard</li> <li>Create Image for Template Creation</li> <li>Create Template <code>T1</code> with <em>Image Volume</em> and additional <em>Volume</em></li> <li>Modify Template <code>T1</code> with update <em>Volume</em> size</li> <li>Volume size should be editable</li> <li>Click Save, then edit new version of <code>T1</code></li> <li>Volume size should be updated as expected</li> </ol> VolumeSnapshot Management 
https://harvester.github.io/tests/manual/_incoming/2296_volumesnapshot_management/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2296_volumesnapshot_management/ - Ref: https://github.com/harvester/harvester/issues/2296 Verify Steps: Install Harvester with any nodes Create an Image for VM creation Create vm vm1 and start it *Take Snapshot on vm1 named vm1s1 Navigate to Volumes, click disks of vm1 then move to Snapshots tab, volume of snapshot vm1s1 should not displayed Navigate to Advanced/Volume Snapshots, volumes of snapshot vm1s1 should not displayed Navigate to Advanced/VM Snapshots, snapshot vm1s1 should displayed + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2296">https://github.com/harvester/harvester/issues/2296</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create an Image for VM creation</li> <li>Create vm <code>vm1</code> and start it</li> <li>*<em>Take Snapshot</em> on <code>vm1</code> named <code>vm1s1</code></li> <li>Navigate to <em>Volumes</em>, click disks of <code>vm1</code> then move to <strong>Snapshots</strong> tab, volume of snapshot <code>vm1s1</code> should not displayed</li> <li>Navigate to <em>Advanced/Volume Snapshots</em>, volumes of snapshot <code>vm1s1</code> should not displayed</li> <li>Navigate to <em>Advanced/VM Snapshots</em>, snapshot <code>vm1s1</code> should displayed</li> </ol> Wrong mgmt bond MTU size during initial ISO installation https://harvester.github.io/tests/manual/_incoming/2437_wrong_mgmt_bond_mtu_size_during_initial_iso_installation/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2437_wrong_mgmt_bond_mtu_size_during_initial_iso_installation/ - Ref: https://github.com/harvester/harvester/issues/2437 Verify Steps: Install Harvester via ISO and configure IPv4 Method with static Inputbox MTU (Optional) should be available and optional Configured MTU should reflect to the port&rsquo;s MTU after installation + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2437">https://github.com/harvester/harvester/issues/2437</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/192757588-73484301-07e7-4a37-9d1e-cbcada9b5774.png" alt="image"> <img src="https://user-images.githubusercontent.com/5169694/192758868-422887df-557c-4d8c-9ee8-2ab0f863f97a.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester via ISO and configure <strong>IPv4 Method</strong> with <em>static</em></li> <li>Inputbox <code>MTU (Optional)</code> should be available and optional</li> <li>Configured MTU should reflect to the port&rsquo;s MTU after installation</li> </ol> Zero downtime upgrade https://harvester.github.io/tests/manual/_incoming/1707-zero-downtime-upgrade/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1707-zero-downtime-upgrade/ - Related issues: #1707 [BUG] Zero downtime upgrade stuck in &ldquo;Waiting for VM live-migration or shutdown&hellip;&rdquo; Category: Upgrade Verification Steps Create a ubuntu image from URL Enable Network with management-mgmt Create a virtual network vlan1 with id 1 Setup backup target Create a VM backup Follow the guide to do upgrade test Expected Results Can upgrade correctly with all VMs remain in running + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1707">#1707</a> [BUG] Zero downtime upgrade stuck in &ldquo;Waiting for VM live-migration or 
shutdown&hellip;&rdquo;</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a ubuntu image from URL</li> <li>Enable Network with management-mgmt</li> <li>Create a virtual network vlan1 with id 1</li> <li>Setup backup target</li> <li>Create a VM backup</li> <li>Follow the <a href="https://github.com/harvester/docs/blob/main/docs/upgrade/automatic.md">guide</a> to do upgrade test <img src="https://user-images.githubusercontent.com/29251855/166428121-391f5321-ec8e-46ce-9a96-ea92f04b3907.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/166429966-b08cea0e-c457-41b2-a647-b6d3ac00aa58.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can upgrade correctly with all VMs remain in running <img src="https://user-images.githubusercontent.com/29251855/166430303-376d9e30-bf92-49eb-b3e2-8eeeb2375702.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/166430680-bb9e14fe-7da5-4b73-9ec8-47a780b4914c.png" alt="image"></li> </ol> diff --git a/integration/modules/skel_skel_spec.html b/integration/modules/skel_skel_spec.html index 00e37211c..90f150b95 100644 --- a/integration/modules/skel_skel_spec.html +++ b/integration/modules/skel_skel_spec.html @@ -1,11 +1,11 @@ -skel/skel.spec | Cypress Integration Tests for Harvester
Options
All
  • Public
  • Public/Protected
  • All
Menu

Index

Functions

  • changePassword(): void
  • +skel/skel.spec | Cypress Integration Tests for Harvester
    Options
    All
    • Public
    • Public/Protected
    • All
    Menu

    Index

    Functions

    • changePassword(): void
      1. Login
      2. Change Password
      3. Log out
      4. Login with new Password
      -
      notimplemented

      Returns void

    • deleteUser(): void
    • deleteUser(): void
      1. Log in as admin
      2. Navigate to user admin page
      3. @@ -14,7 +14,7 @@
      4. Try to log in as deleted user
      5. Verify that login fails
      -
      notimplemented

      Returns void

    • testSkelTest(): void
    • testSkelTest(): void
      1. Login to the page
      2. Edit the Type
      3. diff --git a/integration/modules/testcases_VM_settings_cloud_config_templates_spec.html b/integration/modules/testcases_VM_settings_cloud_config_templates_spec.html index 50375914d..cac653824 100644 --- a/integration/modules/testcases_VM_settings_cloud_config_templates_spec.html +++ b/integration/modules/testcases_VM_settings_cloud_config_templates_spec.html @@ -1,4 +1,4 @@ -testcases/VM settings/cloud-config-templates.spec | Cypress Integration Tests for Harvester
        Options
        All
        • Public
        • Public/Protected
        • All
        Menu

        Index

        Functions

        Functions

        • CheckUserData(): void
        • +testcases/VM settings/cloud-config-templates.spec | Cypress Integration Tests for Harvester
          Options
          All
          • Public
          • Public/Protected
          • All
          Menu

          Index

          Functions

          Functions

          • CheckUserData(): void
            1. Login
            2. Navigate to the cloud template create page
            3. diff --git a/integration/modules/testcases_VM_settings_ssh_keys_spec.html b/integration/modules/testcases_VM_settings_ssh_keys_spec.html index f6ecf6e59..51683773b 100644 --- a/integration/modules/testcases_VM_settings_ssh_keys_spec.html +++ b/integration/modules/testcases_VM_settings_ssh_keys_spec.html @@ -1,4 +1,4 @@ -testcases/VM settings/ssh-keys.spec | Cypress Integration Tests for Harvester
              Options
              All
              • Public
              • Public/Protected
              • All
              Menu

              Index

              Functions

              • CheckCreateSsh(): void
              • PresetSsh(): void

              Legend

              • Function

              Settings

              Theme

              Generated using TypeDoc

              \ No newline at end of file diff --git a/integration/modules/testcases_networks_network_spec.html b/integration/modules/testcases_networks_network_spec.html index 70b347013..d66e24a83 100644 --- a/integration/modules/testcases_networks_network_spec.html +++ b/integration/modules/testcases_networks_network_spec.html @@ -1,4 +1,4 @@ -testcases/networks/network.spec | Cypress Integration Tests for Harvester
              Options
              All
              • Public
              • Public/Protected
              • All
              Menu

              Index

              Functions

              • CheckCreateNetwork(): void
              • CreateVlan1(): void

              Legend

              • Function

              Settings

              Theme

              Generated using TypeDoc

              \ No newline at end of file diff --git a/integration/modules/testcases_virtualmachines_virtual_machine_spec.html b/integration/modules/testcases_virtualmachines_virtual_machine_spec.html index 933686f33..f151a3214 100644 --- a/integration/modules/testcases_virtualmachines_virtual_machine_spec.html +++ b/integration/modules/testcases_virtualmachines_virtual_machine_spec.html @@ -1,4 +1,4 @@ -testcases/virtualmachines/virtual-machine.spec | Cypress Integration Tests for Harvester
              Options
              All
              • Public
              • Public/Protected
              • All
              Menu

              Index

              Functions

              • CheckMultiVMScheduler(): void
              • +testcases/virtualmachines/virtual-machine.spec | Cypress Integration Tests for Harvester
                Options
                All
                • Public
                • Public/Protected
                • All
                Menu

                Index

                Functions

                • CheckMultiVMScheduler(): void
                • DeleteVMWithImage(): void
                • DeleteVMWithImage(): void
                  1. Create vm “vm-1”
                  2. Create a image “img-1” by export the volume used by vm “vm-1”
                  3. diff --git a/manual/_incoming/index.xml b/manual/_incoming/index.xml index 96d05a2c6..b2d6b7fce 100644 --- a/manual/_incoming/index.xml +++ b/manual/_incoming/index.xml @@ -12,805 +12,805 @@ https://harvester.github.io/tests/manual/_incoming/2715_adapt_alertmanager_to_dedicated_storage_network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2715_adapt_alertmanager_to_dedicated_storage_network/ - Ref: https://github.com/harvester/harvester/issues/2715 criteria PVCs (alertmanager/grafana/Prometheus) will attach back after dedicated storage network switched. Verify Steps: Install Harvester with any nodes Navigate to Networks -&gt; Cluster Networks/Configs, create Cluster Network named vlan, create Network Config for all nodes Navigate to Advanced -&gt; Settings, edit storage-network Select Enable then select vlan as cluster network, fill in VLAN ID and IP Range Wait until error message (displayed under storage network setting) disappeared Navigate to Monitoring &amp; Logging -&gt; Monitoring -&gt; Configuration Dashboard of Prometheus Graph, Grafana and Altertmanager should able to access, and should contain old data. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2715">https://github.com/harvester/harvester/issues/2715</a></p> <h3 id="criteria">criteria</h3> <p>PVCs (alertmanager/grafana/Prometheus) will attach back after dedicated storage network switched.</p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Navigate to <em>Networks -&gt; Cluster Networks/Configs</em>, create Cluster Network named <code>vlan</code>, create <strong>Network Config</strong> for all nodes</li> <li>Navigate to <em>Advanced -&gt; Settings</em>, edit <code>storage-network</code></li> <li>Select <code>Enable</code> then select <code>vlan</code> as cluster network, fill in <strong>VLAN ID</strong> and <strong>IP Range</strong></li> <li>Wait until error message (displayed under <em>storage network</em> setting) disappeared</li> <li>Navigate to <em>Monitoring &amp; Logging -&gt; Monitoring -&gt; Configuration</em></li> <li>Dashboard of Prometheus Graph, Grafana and Altertmanager should able to access, and should contain old data.</li> </ol> Add backup-taget connection status https://harvester.github.io/tests/manual/_incoming/2631_add_backup-taget_connection_status/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2631_add_backup-taget_connection_status/ - Ref: https://github.com/harvester/harvester/issues/2631 Verified this feature has been implemented. 
Test Information Environment: qemu/KVM 2 nodes Harvester Version: master-032742f0-head ui-source Option: Auto Verify Steps: Install Harvester with any nodes Login to Dashboard then navigate to Advanced/Settings Setup a invalid NFS/S3 backup-target, then click Test connection button, error message should displayed Setup a valid NFS/S3 backup-target, then click Test connection button, notify message should displayed Navigate to Advanced/VM Backups, notify message should NOT displayed Navigate to Advanced/Settings and stop the backup-target server, then navigate to Advanced/VM Backups, error message should displayed + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2631">https://github.com/harvester/harvester/issues/2631</a></p> <p>Verified this feature has been implemented.</p> <p><img src="https://user-images.githubusercontent.com/5169694/190369936-c07b0a5f-8685-4813-8108-1032caf09183.png" alt="image"></p> <h2 id="test-information">Test Information</h2> <ul> <li>Environment: <strong>qemu/KVM 2 nodes</strong></li> <li>Harvester Version: <strong>master-032742f0-head</strong></li> <li><strong>ui-source</strong> Option: <strong>Auto</strong></li> </ul> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to <em>Advanced/Settings</em></li> <li>Setup a invalid NFS/S3 backup-target, then click <strong>Test connection</strong> button, error message should displayed</li> <li>Setup a valid NFS/S3 backup-target, then click <strong>Test connection</strong> button, notify message should displayed</li> <li>Navigate to <em>Advanced/VM Backups</em>, notify message should NOT displayed</li> <li>Navigate to <em>Advanced/Settings</em> and stop the backup-target server, then navigate to <em>Advanced/VM Backups</em>, error message should displayed</li> </ol> Add extra disks by using raw disks https://harvester.github.io/tests/manual/_incoming/extra-disk-using-raw-disk/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/extra-disk-using-raw-disk/ - Prepare a disk (with WWN) and attach it to the node. Navigate to &ldquo;Host&rdquo; &gt; &ldquo;Edit Config&rdquo; &gt; &ldquo;Disks&rdquo; and open the dropdown menu &ldquo;Add disks&rdquo;. Choose a disk to add, e.g. /dev/sda but not /dev/sda1. Expected Results The raw disk shall be schedulable as a longhorn disk as a whole (without any partition). Ths raw disk shall be in provisioned phase. Reboot the host and the disk shall be reattached and added back as a longhorn disk. + <ol> <li>Prepare a disk (with WWN) and attach it to the node.</li> <li>Navigate to &ldquo;Host&rdquo; &gt; &ldquo;Edit Config&rdquo; &gt; &ldquo;Disks&rdquo; and open the dropdown menu &ldquo;Add disks&rdquo;.</li> <li>Choose a disk to add, e.g. 
<code>/dev/sda</code> but not <code>/dev/sda1</code>.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The raw disk shall be schedulable as a longhorn disk as a whole (without any partition).</li> <li>Ths raw disk shall be in <code>provisioned</code> phase.</li> <li>Reboot the host and the disk shall be reattached and added back as a longhorn disk.</li> </ol> Add websocket disconnect notification https://harvester.github.io/tests/manual/_incoming/2186_add_websocket_disconnect_notification/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2186_add_websocket_disconnect_notification/ - Ref: https://github.com/harvester/harvester/issues/2186 Verify Steps: Install Harvester with at least 2 nodes Login to Dashboard via Node IP Navigate to Advanced/Settings and update ui-index to https://releases.rancher.com/harvester-ui/dashboard/release-harvester-v1.0/index.html and force refresh to make it applied. restart the Node which holding the IP Notification of websocket disconnected should appeared + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2186">https://github.com/harvester/harvester/issues/2186</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/177529443-a9478e33-a955-4b48-8485-ab6eabbf3824.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Login to Dashboard via Node IP</li> <li>Navigate to <em>Advanced/Settings</em> and update <strong>ui-index</strong> to <code>https://releases.rancher.com/harvester-ui/dashboard/release-harvester-v1.0/index.html</code> and force refresh to make it applied.</li> <li>restart the Node which holding the IP</li> <li>Notification of websocket disconnected should appeared</li> </ol> Alertmanager supports main stream receivers https://harvester.github.io/tests/manual/_incoming/2521-alertmanager-supports-main-stream-receivers/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2521-alertmanager-supports-main-stream-receivers/ - Related issues: #2521 [FEATURE] Alertmanager supports main stream receivers Category: Alter manager Verification Steps Prepare another VM or machine have the same subnet with the Harvester Prepare a webhook server on the VM, reference to https://github.com/w13915984028/harvester-develop-summary/blob/main/test-log-event-audit-with-webhook-server.md You may need to install python3 web package, refer to https://webpy.org/install Run export PORT=8094 on the webhook server VM Launch the webhook server python3 simple-webhook-server.py davidtclin@ubuntu-clean:~$ python3 simple-webhook-server.py usage: export PORT=1234 to set http server port number as 1234 start a simple webhook server, PORT 8094 @ 2022-09-21 16:39:58. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2521">#2521</a> [FEATURE] Alertmanager supports main stream receivers</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Alter manager</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare another VM or machine have the same subnet with the Harvester</li> <li>Prepare a webhook server on the VM, reference to <a href="https://github.com/w13915984028/harvester-develop-summary/blob/main/test-log-event-audit-with-webhook-server.md">https://github.com/w13915984028/harvester-develop-summary/blob/main/test-log-event-audit-with-webhook-server.md</a></li> <li>You may need to install python3 web package, refer to <a href="https://webpy.org/install">https://webpy.org/install</a></li> <li>Run <code>export PORT=8094</code> on the webhook server VM</li> <li>Launch the webhook server <code>python3 simple-webhook-server.py</code> <pre tabindex="0"><code>davidtclin@ubuntu-clean:~$ python3 simple-webhook-server.py usage: export PORT=1234 to set http server port number as 1234 start a simple webhook server, PORT 8094 @ 2022-09-21 16:39:58.706792 http://0.0.0.0:8094/ </code></pr All Namespace filtering in VM list https://harvester.github.io/tests/manual/_incoming/2578-all-namespace-filtering/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2578-all-namespace-filtering/ - Related issues: #2578 [BUG] When first entering the harvester cluster from Virtualization Managements, some vm&rsquo;s in namespace are not shown in the list Category: UI Verification Steps Create a harvester cluster Create a VM in the default namespace Creating a Namespace (eg: test-vm) Import the Harvester cluster in Rancher access to the harvester cluster from Virtualization Management click Virtual Machines tab Expected Results test-vm-1 should also be shown in the list + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2578">#2578</a> [BUG] When first entering the harvester cluster from Virtualization Managements, some vm&rsquo;s in namespace are not shown in the list</li> </ul> <h2 id="category">Category:</h2> <ul> <li>UI</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a harvester cluster</li> <li>Create a VM in the default namespace</li> <li>Creating a Namespace (eg: test-vm)</li> <li>Import the Harvester cluster in Rancher</li> <li>access to the harvester cluster from Virtualization Management</li> <li>click Virtual Machines tab</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>test-vm-1 should also be shown in the list <img src="https://user-images.githubusercontent.com/24985926/181211867-4f3889cd-a14e-463c-9a7f-0aee2d5f358e.png" alt="image"></li> </ol> Auto provision lots of extra disks https://harvester.github.io/tests/manual/_incoming/large-amount-of-extra-disks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/large-amount-of-extra-disks/ - :warning: This is a heuristic test plan since real world race condition is hard to reproduce. If you find any better alternative, feel free to update. This test is better to perform under QEMU/libvirt environment. Related issues: #1718 [BUG] Automatic disk provisioning result in unusable ghost disks on NVMe drives Category: Storage Verification Steps Create a harvester cluster and attach 10 or more extra disks (needs WWN so that they can be identified uniquely). 
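For this auto-provisioning test the extra disks must expose unique WWNs so Longhorn can identify each one. A hedged sketch of attaching such disks to a QEMU/libvirt guest (the domain name harvester-node-0, image paths, disk size and WWN prefix are all assumptions):
<pre tabindex="0"><code># Create and hot-plug ten qcow2 disks, each with a distinct WWN, on the SCSI bus.
letters=(b c d e f g h i j k)
for i in "${!letters[@]}"; do
  qemu-img create -f qcow2 "/var/lib/libvirt/images/extra-disk-${i}.qcow2" 10G
  virsh attach-disk harvester-node-0 "/var/lib/libvirt/images/extra-disk-${i}.qcow2" "sd${letters[$i]}" \
    --driver qemu --subdriver qcow2 --targetbus scsi \
    --wwn "5000c50015ea71$(printf '%02x' "$i")" --persistent --live
done
# Inside the node, `lsblk -o NAME,WWN` should then list ten disks with distinct WWNs.
</code></pre>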
+ <blockquote> <p>:warning: This is a heuristic test plan since real world race condition is hard to reproduce. If you find any better alternative, feel free to update.</p> <p>This test is better to perform under QEMU/libvirt environment.</p> </blockquote> <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1718">#1718</a> [BUG] Automatic disk provisioning result in unusable ghost disks on NVMe drives</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a harvester cluster and attach 10 or more extra disks (needs WWN so that they can be identified uniquely).</li> <li>Add <a href="https://docs.harvesterhci.io/v1.0/settings/settings/#auto-disk-provision-paths-experimental"><code>auto-disk-provision-paths</code></a> setting and provide a value that matches all the disks added from previous step.</li> <li>Wait for minutes for the auto-provisioning process.</li> <li>Eventually, all disks matching the pattern should be partitioned, formatted and mounted successfully.</li> <li>Navigate to longhorn dashboard to see if each disk is successfully added and scheduled.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>A large amout of disks can be auto-provisioned simultaneously.</li> </ol> Boot installer under Legacy BIOS and UEFI https://harvester.github.io/tests/manual/_incoming/2023-boot-installer-legacy-and-uefi/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2023-boot-installer-legacy-and-uefi/ - Related issues #2023 Legacy Iso for older servers Verification Steps BIOS Test Build harvester-installer Boot build artifact using BIOS Legacy mode: qemu-system-x86_64 -m 2048 -cdrom ../dist/artifacts/harvester-master-amd64 Verify that the installer boot process reaches the screen that says &ldquo;Create New Cluster&rdquo; or &ldquo;Join existing cluster&rdquo; UEFI Test Build harvester-installer (or use the same one from the BIOS Test, it&rsquo;s a hybrid ISO) Boot build artifact using UEFI mode: qemu-system-x86_64 -m 2048 -cdrom . 
+ <ul> <li>Related issues <a href="https://github.com/harvester/harvester/issues/2023">#2023</a> Legacy Iso for older servers</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <h3 id="bios-test">BIOS Test</h3> <ol> <li>Build <a href="https://github.com/harvester/harvester-installer">harvester-installer</a></li> <li>Boot build artifact using BIOS Legacy mode: <code>qemu-system-x86_64 -m 2048 -cdrom ../dist/artifacts/harvester-master-amd64</code></li> <li>Verify that the installer boot process reaches the screen that says &ldquo;Create New Cluster&rdquo; or &ldquo;Join existing cluster&rdquo;</li> </ol> <h3 id="uefi-test">UEFI Test</h3> <ol> <li>Build <a href="https://github.com/harvester/harvester-installer">harvester-installer</a> (or use the same one from the BIOS Test, it&rsquo;s a hybrid ISO)</li> <li>Boot build artifact using UEFI mode: <code>qemu-system-x86_64 -m 2048 -cdrom ../dist/artifacts/harvester-master-amd64 -bios /usr/share/qemu/ovmf-x86_64.bin</code> (OVMF is a port of the UEFI firmware to qemu)</li> <li>Verify that the installer boot process reaches the screen that says &ldquo;Create New Cluster&rdquo; or &ldquo;Join existing cluster&rdquo;</li> </ol> Check can start VM after Harvester upgrade https://harvester.github.io/tests/manual/_incoming/start-vm-after-harvester-upgrade-complete/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/start-vm-after-harvester-upgrade-complete/ - Related issues: #2270 [BUG] Unable start VM after upgraded v1.0.1 to v1.0.2-rc2 Category: Harvester Upgrade Verification Steps Prepare the previous stable Harvester release cluster Create image Enable Network and create VM Create several virtual machine Follow the official document steps to prepare the online or offline upgrade Shutdown all virtual machines Start the upgrade Confirm all the upgrade process complete Start all the virtual machines Expected Results All virtual machine could be correctly started and work as expected + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2270">#2270</a> [BUG] Unable start VM after upgraded v1.0.1 to v1.0.2-rc2</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Harvester Upgrade</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare the previous stable Harvester release cluster</li> <li>Create image</li> <li>Enable Network and create VM</li> <li>Create several virtual machine</li> <li>Follow the <a href="https://docs.harvesterhci.io/v1.0/upgrade/automatic/">official document steps</a> to prepare the online or offline upgrade</li> <li>Shutdown all virtual machines</li> <li>Start the upgrade</li> <li>Confirm all the upgrade process complete</li> <li>Start all the virtual machines</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>All virtual machine could be correctly started and work as expected</li> </ul> Check conditions when stop/pause VM https://harvester.github.io/tests/manual/_incoming/1987-failure-message-in-stopping-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1987-failure-message-in-stopping-vm/ - Related issues: #1987 Verification Steps Stop Request should not have failure message Create a VM with runStrategy: RunStrategyAlways. Stop the VM. Check there is no Failure attempting to delete VMI: &lt;nil&gt; in VM status. UI should not show pause message Create a VM. Pause the VM. 
Although the message The status of pod readliness gate &quot;kubevirt.io/virtual-machine-unpaused&quot; is not &quot;True&quot;, but False is in the VM condition, UI should not show it. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1987">#1987</a></li> </ul> <h2 id="verification-steps">Verification Steps</h2> <p>Stop Request should not have failure message</p> <ol> <li>Create a VM with <code>runStrategy: RunStrategyAlways</code>.</li> <li>Stop the VM.</li> <li>Check there is no <code>Failure attempting to delete VMI: &lt;nil&gt;</code> in VM status.</li> </ol> <p>UI should not show pause message</p> <ol> <li>Create a VM.</li> <li>Pause the VM.</li> <li>Although the message <code>The status of pod readliness gate &quot;kubevirt.io/virtual-machine-unpaused&quot; is not &quot;True&quot;, but False</code> is in the VM condition, UI should not show it.</li> </ol> Check DNS on install with Github SSH keys https://harvester.github.io/tests/manual/_incoming/1903-dns-github-ssh-keys/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1903-dns-github-ssh-keys/ - Related issues: #1903 DNS server not available during install Verification Steps Without PXE Start a new install Set DNS as 8.8.8.8 Add in github SSH keys Finish install SSH into node with SSH keys from github (rancher@hostname) Verify login was successful With PXE Got vagrant setup from https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester Changed settings.yml DHCP config and added dns: 8.8.8.8 dhcp_server: ip: 192.168.0.254 subnet: 192.168.0.0 netmask: 255.255.255.0 range: 192.168.0.50 192.168.0.130 dns: 8.8.8.8 https: false Also changed ssh_authorized_keys and commented out default SSH key and added username for github + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1903">#1903</a> DNS server not available during install</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <h3 id="without-pxe">Without PXE</h3> <ol> <li>Start a new install</li> <li>Set DNS as <code>8.8.8.8</code></li> <li>Add in github SSH keys</li> <li>Finish install</li> <li>SSH into node with SSH keys from github (<code>rancher@hostname</code>)</li> <li>Verify login was successful</li> </ol> <h3 id="with-pxe">With PXE</h3> <ol> <li>Got vagrant setup from <a href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester">https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester</a></li> <li>Changed <code>settings.yml</code> DHCP config and added <code>dns: 8.8.8.8</code></li> </ol> <pre tabindex="0"><code>dhcp_server: ip: 192.168.0.254 subnet: 192.168.0.0 netmask: 255.255.255.0 range: 192.168.0.50 192.168.0.130 dns: 8.8.8.8 https: false </code></pr Check IPAM configuration with IPAM https://harvester.github.io/tests/manual/_incoming/1697-ipam-load-balancer/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1697-ipam-load-balancer/ - Related issues: #1697 Optimization for the Harvester load balancer Verification Steps Install the latest rancher and import a Harvester cluster Create a cluster by Harvester node driver Navigate to the workload Page, create a workload Click &ldquo;Add ports&rdquo;, select type as LB, protocol as TCP Check IPAM selector Navigate to the service page, create a LB Click &ldquo;Add-on config&rdquo; tab and check IPAM and port + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1697">#1697</a> Optimization for the 
Harvester load balancer</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install the latest rancher and import a Harvester cluster</li> <li>Create a cluster by Harvester node driver</li> <li>Navigate to the workload Page, create a workload</li> <li>Click &ldquo;Add ports&rdquo;, select type as LB, protocol as TCP</li> <li>Check IPAM selector</li> <li>Navigate to the service page, create a LB</li> <li>Click &ldquo;Add-on config&rdquo; tab and check IPAM and port <img src="https://user-images.githubusercontent.com/83787952/152212105-2b2335be-b12b-42ac-bfcf-aa1d2aeb6fd3.png" alt="image.png"> <img src="https://user-images.githubusercontent.com/83787952/152212109-039a3e23-9eae-4ffc-9318-58f048a112c1.png" alt="image.png"></li> </ol> Check IPv4 static method in ISO installer https://harvester.github.io/tests/manual/_incoming/2796-check-ipv4-static-method-in-iso-installer/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2796-check-ipv4-static-method-in-iso-installer/ - Related issues: #2796 [BUG] configure network failed if use static mode Category: Newtork Harvester Installer Verification Steps Use latest ISO to install Enter VLAN field with empty 1 1000 choose static method fill other fields press enter to the next page no error found, and show DNS config page Expected Results During Harvester ISO installer We can configure VLAN network on the static mode with the following settings: No error message blocked Can proceed to dns config page + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2796">#2796</a> [BUG] configure network failed if use static mode</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Newtork</li> <li>Harvester Installer</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Use latest ISO to install</li> <li>Enter VLAN field with <ul> <li>empty</li> <li>1</li> <li>1000</li> </ul> </li> <li>choose static method</li> <li>fill other fields</li> <li>press enter to the next page</li> <li>no error found, and show DNS config page</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>During Harvester ISO installer We can configure VLAN network on the static mode with the following settings:</p> Check logs on Harvester https://harvester.github.io/tests/manual/_incoming/2528-check-logs-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2528-check-logs-harvester/ - Related issues: #2528 [BUG] Tons of AppArmor denied messages Category: Logging Environment Setup This should be run on a Harvester node that has been up for a while and has been in use Verification Steps SSH to harvester node Execute journalctl -b -f Look through logs and verify that there isn&rsquo;t anything generating lots of erroneous messages Expected Results There shouldn&rsquo;t be large volumes of erroneous messages + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2528">#2528</a> [BUG] Tons of AppArmor denied messages</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Logging</li> </ul> <h2 id="environment-setup">Environment Setup</h2> <ul> <li>This should be run on a Harvester node that has been up for a while and has been in use</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>SSH to harvester node</li> <li>Execute <code>journalctl -b -f</code></li> <li>Look through logs and verify that there isn&rsquo;t anything generating lots of erroneous messages</li> </ol> <h2 id="expected-results">Expected 
Results</h2> <ol> <li>There shouldn&rsquo;t be large volumes of erroneous messages</li> </ol> Check Network interface link status can match the available NICs in Harvester vlanconfig https://harvester.github.io/tests/manual/_incoming/2988-check-network-link-match-vlanconfig/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2988-check-network-link-match-vlanconfig/ - Related issues: #2988 [BUG] Network interface link status judgement did not match the available NICs in Harvester vlanconfig Category: Network Verification Steps Create cluster network cn1 Create a vlanconfig config-n1 on cn1 which applied to node 1 only Select an available NIC on the Uplink Create a vlan, the cluster network cn1 vlanconfig and provide valid vlan id 91 Edit config-n1, Check NICs list in Uplink ssh to node 1 + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2988">#2988</a> [BUG] Network interface link status judgement did not match the available NICs in Harvester vlanconfig</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Create cluster network <code>cn1</code> <img src="https://user-images.githubusercontent.com/29251855/196580297-57541544-48f5-4492-b3e9-a3450697f490.png" alt="image"></p> </li> <li> <p>Create a vlanconfig <code>config-n1</code> on <code>cn1</code> which applied to node 1 only <img src="https://user-images.githubusercontent.com/29251855/196580491-0572c539-5828-4f2e-a0a6-59b40fcc549b.png" alt="image"></p> </li> <li> <p>Select an available NIC on the Uplink <img src="https://user-images.githubusercontent.com/29251855/196580574-d38d59de-251c-4cf8-885d-655b76a78659.png" alt="image"></p> </li> <li> <p>Create a vlan, the cluster network <code>cn1</code> vlanconfig and provide valid vlan id <code>91</code> <img src="https://user-images.githubusercontent.com/29251855/196584602-b663ca69-da9a-42e3-94e0-41e094ff1d0b.png" alt="image"></p> Check rancher-monitoring-grafana volume size https://harvester.github.io/tests/manual/_incoming/2282-check-rancher-monitoring-grafana-volume-size/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2282-check-rancher-monitoring-grafana-volume-size/ - Related issues: #2282 [BUG] rancher-monitoring-grafana is too small and it keeps growing Category: Monitoring Verification Steps Harvester cluster running after 24 hours Access Harvester Longhorn dashboard via https:///dashboard/c/local/longhorn Open the Longhorn UI Open the volume page Check the rancher-monitoring-grafana size and usage Shutdown a management node machine Power on the management node machine Wait for 60 minutes Check the rancher-monitoring-grafana size and usage in Longhorn UI Shutdown all management node machines in sequence Power on all management node machines in sequence Wait for 60 minutes Check the rancher-monitoring-grafana size and usage in Longhorn UI Expected Results The rancher-monitoring-grafana default allocated with 2Gi and Actual usage 108 Mi after running after 24 hours Turn off then turn on the specific vip harvester node machine, the The rancher-monitoring-grafana keep stable in 107 Mi after turning on 60 minutes Turn off then turn on all four harvester node machines, the The rancher-monitoring-grafana keep stable in 107 Mi after turning on 60 minutes + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2282">#2282</a> [BUG] rancher-monitoring-grafana is too small and it keeps 
growing</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Monitoring</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Harvester cluster running after 24 hours</li> <li>Access Harvester Longhorn dashboard via https://<!-- raw HTML omitted -->/dashboard/c/local/longhorn</li> <li>Open the Longhorn UI</li> <li>Open the volume page</li> <li>Check the <code>rancher-monitoring-grafana</code> size and usage</li> <li>Shutdown a management node machine</li> <li>Power on the management node machine</li> <li>Wait for 60 minutes</li> <li>Check the <code>rancher-monitoring-grafana</code> size and usage in Longhorn UI</li> <li>Shutdown all management node machines in sequence</li> <li>Power on all management node machines in sequence</li> <li>Wait for 60 minutes</li> <li>Check the <code>rancher-monitoring-grafana</code> size and usage in Longhorn UI</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>The <code>rancher-monitoring-grafana</code> default allocated with <code>2Gi</code> and Actual usage <code>108 Mi</code> after running after 24 hours <img src="https://user-images.githubusercontent.com/29251855/191000121-9c3c640e-7d7f-4d1b-84f6-39745abca0ce.png" alt="image"></p> Check support bundle for SLE Micro OS https://harvester.github.io/tests/manual/_incoming/2420-2464-check-support-bundle-sle-micro-os/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2420-2464-check-support-bundle-sle-micro-os/ - Related issues: #2420 [FEATURE] support bundle: support SLE Micro OS Related issues: #2464 [backport v1.0] [FEATURE] support bundle: support SLE Micro OS Category: Support Bundle Verification Steps Download support bundle in support page Extract the support bundle, check every file have content ssh to harvester node Check the /etc/os-release file content Expected Results Check can download support bundle correctly, check can access every file without empty Checked every harvester nodes, the ID have changed to sle-micro-rancher + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2420">#2420</a> [FEATURE] support bundle: support SLE Micro OS</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2464">#2464</a> [backport v1.0] [FEATURE] support bundle: support SLE Micro OS</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Support Bundle</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Download support bundle in support page</li> <li>Extract the support bundle, check every file have content</li> <li>ssh to harvester node</li> <li>Check the /etc/os-release file content</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>Check can download support bundle correctly, check can access every file without empty</p> Check the OS types in Advanced Options https://harvester.github.io/tests/manual/_incoming/2776-check-os-types-in-advanced-options/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2776-check-os-types-in-advanced-options/ - Related issues: #2776 [FEATURE] remove some dead OS types Category: Network Verification Steps Login harvester dashboard Open the VM create page, check the OS type list Open the image create page, check the OS type list Open the template create page, check the OS type list Expected Results The following OS types should be removed from list Turbolinux Mandriva Xandros In v1.1.0 master we add the SUSE Linux Enterprise in the VM creation page In the image create page In the 
template create page + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2776">#2776</a> [FEATURE] remove some dead OS types</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Login harvester dashboard</li> <li>Open the VM create page, check the OS type list</li> <li>Open the image create page, check the OS type list</li> <li>Open the template create page, check the OS type list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>The following OS types should be removed from list</p> <ul> <li>Turbolinux</li> <li>Mandriva</li> <li>Xandros</li> </ul> </li> <li> <p>In v1.1.0 master we add the <code>SUSE Linux Enterprise</code> in the VM creation page <img src="https://user-images.githubusercontent.com/29251855/190973269-764e425f-20be-4cb1-8334-e7af668a7798.png" alt="image"></p> Check the VM is available when Harvester upgrade failed https://harvester.github.io/tests/manual/_incoming/vm-availability-when-harvester-upgrade-failed/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/vm-availability-when-harvester-upgrade-failed/ - Category: Harvester Upgrade Verification Steps Prepare the previous stable Harvester release cluster Create image Enable Network and create VM Create several virtual machine Follow the official document steps to prepare the online or offline upgrade Do not shutdown virtual machine Start the upgrade Check the VM status if the upgrade failed at Preload images, Upgrade Rancher and Upgrade Harvester phase Check the VM status if the upgrade failed at the Pre-drain, Post-drain and RKE2 &amp; OS upgrade phase Expected Results The VM should be work when upgrade failed at Preload images, Upgrade Rancher and Upgrade Harvester phase The VM could not able to function well when upgrade failed at the Pre-drain, Post-drain and RKE2 &amp; OS upgrade phase + <h2 id="category">Category:</h2> <ul> <li>Harvester Upgrade</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare the previous stable Harvester release cluster</li> <li>Create image</li> <li>Enable Network and create VM</li> <li>Create several virtual machine</li> <li>Follow the <a href="https://docs.harvesterhci.io/v1.0/upgrade/automatic/">official document steps</a> to prepare the online or offline upgrade</li> <li>Do not shutdown virtual machine</li> <li>Start the upgrade</li> <li>Check the VM status if the upgrade failed at <code>Preload images</code>, <code>Upgrade Rancher</code> and <code>Upgrade Harvester</code> phase</li> <li>Check the VM status if the upgrade failed at the <code>Pre-drain</code>, <code>Post-drain</code> and <code>RKE2 &amp; OS upgrade</code> phase</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should be work when upgrade failed at <code>Preload images</code>, <code>Upgrade Rancher</code> and <code>Upgrade Harvester</code> phase</li> <li>The VM could not able to function well when upgrade failed at the <code>Pre-drain</code>, <code>Post-drain</code> and <code>RKE2 &amp; OS upgrade</code> phase</li> </ol> Check version compatibility during an upgrade https://harvester.github.io/tests/manual/_incoming/2431-check-version-compatibility-during-upgrade/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2431-check-version-compatibility-during-upgrade/ - Related issues: #2431 [FEATURE] Check version compatibility during an upgrade Category: Upgrade Verification Steps 
Test Plan 1: v1.0.2 upgrade to v1.1.0 with release tag Test Plan 2: v1.0.3 upgrade to v1.1.0 with release tag Test Plan 3: v1.0.2 upgrade to v1.1.0 without release tag Prepare v1.0.2, v1.0.3 Harvester ISO image Prepare v1.1.0 ISO image with release tag Prepare v1.1.0 ISO image without release tag Put different ISO image to HTTP server Create the upgrade yaml to create service cat &lt;&lt;EOF | kubectl apply -f - apiVersion: harvesterhci. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2431">#2431</a> [FEATURE] Check version compatibility during an upgrade</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <h3 id="test-plan-1-v102-upgrade-to-v110-with-release-tag">Test Plan 1: v1.0.2 upgrade to v1.1.0 with release tag</h3> <h3 id="test-plan-2-v103-upgrade-to-v110-with-release-tag">Test Plan 2: v1.0.3 upgrade to v1.1.0 with release tag</h3> <h3 id="test-plan-3-v102-upgrade-to-v110-without-release-tag">Test Plan 3: v1.0.2 upgrade to v1.1.0 without release tag</h3> <ol> <li>Prepare v1.0.2, v1.0.3 Harvester ISO image</li> <li>Prepare v1.1.0 ISO image with release tag</li> <li>Prepare v1.1.0 ISO image without release tag</li> <li>Put different ISO image to HTTP server</li> <li>Create the upgrade yaml to create service <pre tabindex="0"><code>cat &lt;&lt;EOF | kubectl apply -f - apiVersion: harvesterhci.io/v1beta1 kind: Version metadata: name: v1.1.0 namespace: harvester-system spec: isoURL: &#34;http://192.168.1.110:8000/harvester-eeeb1be-dirty-amd64.iso&#34; EOF </code></pr Check volume status after upgrade https://harvester.github.io/tests/manual/_incoming/2920-volume-status-after-upgrade/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2920-volume-status-after-upgrade/ - Related issues: #2920 [BUG] Volume can&rsquo;t turn into healthy when upgrading from v1.0.3 to v1.1.0-rc2 Category: Volume Verification Steps Prepare a 4 nodes v1.0.3 Harvester cluster Install several images Create three VMs Enable Network Create vlan1 network Shutdown all VMs Upgrade to v1.1.0-rc3 Check the volume status in Longhorn UI Open K9s, Check the pvc status after upgrade Expected Results Can finish the pre-drain of each node and successfully upgrade to v1. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2920">#2920</a> [BUG] Volume can&rsquo;t turn into healthy when upgrading from v1.0.3 to v1.1.0-rc2</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Volume</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare a 4 nodes v1.0.3 Harvester cluster</li> <li>Install several images</li> <li>Create three VMs</li> <li>Enable Network</li> <li>Create vlan1 network</li> <li>Shutdown all VMs</li> <li>Upgrade to v1.1.0-rc3</li> <li>Check the volume status in Longhorn UI</li> <li>Open K9s, Check the pvc status after upgrade</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>Can finish the pre-drain of each node and successfully upgrade to v1.1.0-rc3 <img src="https://user-images.githubusercontent.com/29251855/196434398-a61b5111-7723-4fa6-ac57-2a68ffef73ee.png" alt="image"></p> Clone image (e2e_fe) https://harvester.github.io/tests/manual/_incoming/2562-clone-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2562-clone-image/ - Related issues: #2562 [[BUG] Image&rsquo;s labels will not be copied when execute Clone Category: Images Verification Steps Install Harvester with any nodes Create a Image via URL Clone the Image and named image-b Check image-b labels in Labels tab Expected Results All labels should be cloned and shown in labels tab + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2562">#2562</a> [[BUG] Image&rsquo;s labels will not be copied when execute Clone</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Images</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester with any nodes</li> <li>Create a Image via URL</li> <li>Clone the Image and named image-b</li> <li>Check image-b labels in Labels tab</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>All labels should be cloned and shown in labels tab</li> </ol> collect Fleet logs and YAMLs in support bundles https://harvester.github.io/tests/manual/_incoming/2297_collect_fleet_logs_and_yamls_in_support_bundles/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2297_collect_fleet_logs_and_yamls_in_support_bundles/ - Ref: https://github.com/harvester/harvester/issues/2297 Verify Steps: Install Harvester with any nodes Login to Dashboard then navigate to support page Click Generate Support Bundle and do Generate log files should be exist in the zipfile of support bundle: logs/cattle-fleet-local-system/fleet-agent-&lt;randomID&gt;/fleet-agent.log logs/cattle-fleet-system/fleet-controller-&lt;randomID&gt;/fleet-controller.log logs/cattle-fleet-system/gitjob-&lt;randomID&gt;/gitjob.log + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2297">https://github.com/harvester/harvester/issues/2297</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to support page</li> <li>Click <strong>Generate Support Bundle</strong> and do Generate</li> <li>log files should be exist in the zipfile of support bundle: <ul> <li><code>logs/cattle-fleet-local-system/fleet-agent-&lt;randomID&gt;/fleet-agent.log</code></li> <li><code>logs/cattle-fleet-system/fleet-controller-&lt;randomID&gt;/fleet-controller.log</code></li> <li><code>logs/cattle-fleet-system/gitjob-&lt;randomID&gt;/gitjob.log</code></li> </ul> </li> </ol> Collect system logs 
https://harvester.github.io/tests/manual/_incoming/2647_collect_system_logs/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2647_collect_system_logs/ - Ref: https://github.com/harvester/harvester/issues/2647 Verify Steps: Install Graylog via docker[^1] Install Harvester with any nodes Login to Dashboard then navigate to Monitoring &amp; Logging/Logging Create Cluster Output with following: Name: gelf-evts Type: Logging/Event Output: GELF Target: &lt;Graylog_IP&gt;, &lt;Graylog_Port&gt;, &lt;UDP&gt; Create Cluster Flow with following: Name: gelf-flow Type of Matches: Logging Cluster Outputs: gelf-evts Create an Image for VM creation Create a vm vm1 and start it Login to Graylog dashboard then navigate to search Select update frequency New logs should be posted continuously. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2647">https://github.com/harvester/harvester/issues/2647</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install <em>Graylog</em> via docker[^1]</li> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to <em>Monitoring &amp; Logging/Logging</em></li> <li>Create <strong>Cluster Output</strong> with following: <ul> <li><strong>Name</strong>: gelf-evts</li> <li><strong>Type</strong>: <code>Logging/Event</code></li> <li><strong>Output</strong>: GELF</li> <li><strong>Target</strong>: <code>&lt;Graylog_IP&gt;, &lt;Graylog_Port&gt;, &lt;UDP&gt;</code></li> </ul> </li> <li>Create <strong>Cluster Flow</strong> with following: <ul> <li><strong>Name</strong>: gelf-flow</li> <li><strong>Type</strong> of Matches: <code>Logging</code></li> <li><strong>Cluster Outputs</strong>: <code>gelf-evts</code></li> </ul> </li> <li>Create an Image for VM creation</li> <li>Create a vm <code>vm1</code> and start it</li> <li>Login to <code>Graylog</code> dashboard then navigate to search</li> <li>Select update frequency <img src="https://user-images.githubusercontent.com/5169694/191725169-d1203674-13d8-487b-9fa2-e1d9394fa5c0.png" alt="image"></li> <li>New logs should be posted continuously.</li> </ol> <h3 id="code-snippets-to-setup-graylog">code snippets to setup Graylog</h3> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>docker run --name mongo -d mongo:4.2.22-rc0 </span></span><span style="display:flex;"><span>sysctl -w vm.max_map_count<span style="color:#f92672">=</span><span style="color:#ae81ff">262145</span> </span></span><span style="display:flex;"><span>docker run --name elasticsearch -p 9200:9200 -p 9300:9300 -e xpack.security.enabled<span style="color:#f92672">=</span>false -e node.name<span style="color:#f92672">=</span>es01 -it docker.elastic.co/elasticsearch/elasticsearch:6.8.23 </span></span><span style="display:flex;"><span>docker run --name graylog --link mongo --link elasticsearch -p 9000:9000 -p 12201:12201 -p 1514:1514 -p 5555:5555 -p 12202:12202 -p 12202:12202/udp -e GRAYLOG_PASSWORD_SECRET<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Graypass3WordMor!e&#34;</span> -e GRAYLOG_ROOT_PASSWORD_SHA2<span style="color:#f92672">=</span>899e9793de44cbb14f48b4fce810de122093d03705c0971752a5c15b0fa1ae03 -e GRAYLOG_HTTP_EXTERNAL_URI<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;http://127.0.0.1:9000/&#34;</span> -d graylog/graylog:4.3.5 </span></span></code></pr Config logging in Harvester Dashboard 
https://harvester.github.io/tests/manual/_incoming/2646_config_logging_in_harvester_dashboard/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2646_config_logging_in_harvester_dashboard/ - Ref: https://github.com/harvester/harvester/issues/2646 Verify Steps: Install Harvester with any nodes Login to Dashboard then navigate to Monitoring &amp; Logging/Logging Configurations of Fluentbit and Fluentd should be available in Logging/Configuration + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2646">https://github.com/harvester/harvester/issues/2646</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/191697822-6bc0d7b8-2c56-42e0-805a-408c1ef19845.png" alt="image"> <img src="https://user-images.githubusercontent.com/5169694/191697860-7ef66c19-cd3e-4e4c-b485-315e7eec771d.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to <em>Monitoring &amp; Logging/Logging</em></li> <li>Configurations of Fluentbit and Fluentd should be available in <em>Logging/Configuration</em></li> </ol> Configure VLAN interface on ISO installer UI https://harvester.github.io/tests/manual/_incoming/1647-configure-vlan-interface-on-iso-installer/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1647-configure-vlan-interface-on-iso-installer/ - Related issues: #1647 [FEATURE] Support configuring a VLAN at the management interface in the ISO installer UI Category: Network Harvester Installer Environment Setup Prepare a No VLAN network environment Prepare a VLAN network environment Verification Steps Boot Harvester ISO installer Set VLAN id or keep empty Keep installing Check can complete installation Check harvester has network connectivity Test Plan Matrix Create mode No VLAN DHCP VIP + DHCP node ip DHCP VIP + Static node ip static VIP + DHCP node ip static VIP + Static node ip VLAN DHCP VIP + DHCP node ip DHCP VIP + Static node ip static VIP + DHCP node ip static VIP + Static node ip Join mode No VLAN DHCP VIP + DHCP node ip DHCP VIP + Static node ip static VIP + DHCP node ip static VIP + Static node ip VLAN DHCP VIP + DHCP node ip DHCP VIP + Static node ip static VIP + DHCP node ip static VIP + Static node ip Expected Results Check can complete installation Check harvester has network connectivity ip a show dev mgmt-br [VLAN ID] has IP e. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1647">#1647</a> [FEATURE] Support configuring a VLAN at the management interface in the ISO installer UI</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> <li>Harvester Installer</li> </ul> <h2 id="environment-setup">Environment Setup</h2> <ol> <li>Prepare a <code>No</code> VLAN network environment</li> <li>Prepare a <code>VLAN</code> network environment</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Boot Harvester ISO installer</li> <li>Set VLAN id or keep empty</li> <li>Keep installing</li> <li>Check can complete installation</li> <li>Check harvester has network connectivity</li> </ol> <h2 id="test-plan-matrix">Test Plan Matrix</h2> <h3 id="create-mode">Create mode</h3> <h4 id="no-vlan">No VLAN</h4> <ol> <li>DHCP VIP + DHCP node ip</li> <li>DHCP VIP + Static node ip</li> <li>static VIP + DHCP node ip</li> <li>static VIP + Static node ip</li> </ol> <h4 id="vlan">VLAN</h4> <ol> <li>DHCP VIP + DHCP node ip</li> <li>DHCP VIP + Static node ip</li> <li>static VIP + DHCP node ip</li> <li>static VIP + Static node ip</li> </ol> <h3 id="join-mode">Join mode</h3> <h4 id="no-vlan-1">No VLAN</h4> <ol> <li>DHCP VIP + DHCP node ip</li> <li>DHCP VIP + Static node ip</li> <li>static VIP + DHCP node ip</li> <li>static VIP + Static node ip</li> </ol> <h4 id="vlan-1">VLAN</h4> <ol> <li>DHCP VIP + DHCP node ip</li> <li>DHCP VIP + Static node ip</li> <li>static VIP + DHCP node ip</li> <li>static VIP + Static node ip</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Check can complete installation</li> <li>Check harvester has network connectivity</li> <li><code>ip a show dev mgmt-br [VLAN ID]</code> has IP</li> <li>e.g ip a show dev mgmt-br.100</li> </ol> Create a harvester-specific StorageClass for Longhorn https://harvester.github.io/tests/manual/_incoming/2692_create_a_harvester-specific_storageclass_for_longhorn/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2692_create_a_harvester-specific_storageclass_for_longhorn/ - Ref: https://github.com/harvester/harvester/issues/2692 Verify Steps: Install Harvester with 2+ nodes Login to Dashboard and create an image for VM creation Navigate to Advanced/Storage Classes, harvester-longhorn and longhorn should be available, and harvester-longhorn should be settled as Default Navigate to Volumes and create vol-old where Storage Class is longhorn and vol-new where Storage Class is harvester-longhorn Create VM vm1 attaching vol-old and vol-new Login to vm1 and use fdisk format volumes and mount to folders: old and new Create file and move into both volumes as following commands: dd if=/dev/zero of=file1 bs=10485760 count=10 cp file1 old &amp;&amp; cp file1 new Migrate vm1 to another host, migration should success Login to vm1, volumes should still attaching to folders old and new Execute command sha256sum on old/file1 and new/file1 should show the same value. 
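Inside vm1, the format/mount step and the final integrity check of the StorageClass test above can look like the following sketch (device names /dev/vdb and /dev/vdc are assumptions and depend on attach order; partitioning with fdisk is skipped for brevity):
<pre tabindex="0"><code># Format and mount the two attached volumes as "old" and "new".
sudo mkfs.ext4 /dev/vdb && sudo mkfs.ext4 /dev/vdc
sudo mkdir -p /old /new
sudo mount /dev/vdb /old && sudo mount /dev/vdc /new
# ... create file1 and copy it into both volumes as described in the steps ...
# After migrating vm1 to another host, both copies should still produce identical digests.
sha256sum /old/file1 /new/file1
</code></pre>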
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/2692">https://github.com/harvester/harvester/issues/2692</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/192323716-c863af2a-388f-49d6-8636-d57f8abbad35.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with 2+ nodes</li> <li>Login to Dashboard and create an image for VM creation</li> <li>Navigate to <em>Advanced/Storage Classes</em>, <code>harvester-longhorn</code> and <code>longhorn</code> should be available, and <code>harvester-longhorn</code> should be settled as <strong>Default</strong></li> <li>Navigate to <em>Volumes</em> and create <code>vol-old</code> where Storage Class is <code>longhorn</code> and <code>vol-new</code> where Storage Class is <code>harvester-longhorn</code></li> <li>Create VM <code>vm1</code> attaching <code>vol-old</code> and <code>vol-new</code></li> <li>Login to <code>vm1</code> and use <code>fdisk</code> format volumes and mount to folders: <code>old</code> and <code>new</code></li> <li>Create file and move into both volumes as following commands:</li> </ol> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>dd <span style="color:#66d9ef">if</span><span style="color:#f92672">=</span>/dev/zero of<span style="color:#f92672">=</span>file1 bs<span style="color:#f92672">=</span><span style="color:#ae81ff">10485760</span> count<span style="color:#f92672">=</span><span style="color:#ae81ff">10</span> </span></span><span style="display:flex;"><span>cp file1 old <span style="color:#f92672">&amp;&amp;</span> cp file1 new </span></span></code></pr Create multiple VM instances using VM template with EFI mode selected https://harvester.github.io/tests/manual/_incoming/2577-create-multiple-vm-using-template-efi-mode/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2577-create-multiple-vm-using-template-efi-mode/ - Related issues: #2577 [BUG] Boot in EFI mode not selected when creating multiple VM instances using VM template with EFI mode selected. 
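A quick way to confirm the firmware mode from inside each guest created for this EFI template test (on most Linux distributions the marker directory is /sys/firmware/efi):
<pre tabindex="0"><code># Present only when the guest was booted through UEFI firmware.
[ -d /sys/firmware/efi ] && echo "booted in EFI mode" || echo "booted in legacy BIOS mode"
</code></pre>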
Category: Virtual Machine Verification Steps Create a VM template, check the Booting in EFI mode Create multiple VM instance and use the VM template have Booting in EFI mode checked Wait for all VM running Check the EFI mode is enabled in VM config ssh to each VM Check the /etc/firmware/efi file Expected Results Can create multiple VM instance using VM template with EFI mode selected + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2577">#2577</a> [BUG] Boot in EFI mode not selected when creating multiple VM instances using VM template with EFI mode selected.</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Virtual Machine</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a VM template, check the <code>Booting in EFI mode</code></li> <li>Create multiple VM instance and use the VM template have <code>Booting in EFI mode</code> checked</li> <li>Wait for all VM running</li> <li>Check the EFI mode is enabled in VM config</li> <li>ssh to each VM</li> <li>Check the /etc/firmware/efi file</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Can create multiple VM instance using VM template with EFI mode selected</p> Dashboard Storage usage display when node disk have warning https://harvester.github.io/tests/manual/_incoming/2622-dashboard-storage-usage-display-when-node-disk-have-warning/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2622-dashboard-storage-usage-display-when-node-disk-have-warning/ - Related issues: #2622 [BUG] Dashboard Storage used is wrong when a node disk is warning Category: Storage Verification Steps Login harvester dashboard Access Longhorn UI from url https://192.168.122.136/dashboard/c/local/longhorn Go to Node page Click edit node and disks Select disabling Node scheduling Select disabling storage scheduling on the bottom Open Longhorn dashboard page, check the Storage Schedulable Open Harvester dashboard page, check the used and scheduled storage size Expected Results After disabling the node and storage scheduling on Longhorn UI. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2622">#2622</a> [BUG] Dashboard Storage used is wrong when a node disk is warning</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Login harvester dashboard</li> <li>Access Longhorn UI from url https://192.168.122.136/dashboard/c/local/longhorn</li> <li>Go to Node page</li> <li>Click edit node and disks</li> <li>Select disabling Node scheduling <img src="https://user-images.githubusercontent.com/29251855/187578343-653d0235-92a9-4979-aae0-b62b606df525.png" alt="image"></li> <li>Select disabling storage scheduling on the bottom <img src="https://user-images.githubusercontent.com/29251855/187578175-326b5909-cd6a-4e31-a1cf-92df5e619a5c.png" alt="image"></li> <li>Open Longhorn dashboard page, check the Storage Schedulable</li> <li>Open Harvester dashboard page, check the used and scheduled storage size</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>After disabling the node and storage scheduling on Longhorn UI.</p> Dedicated storage network https://harvester.github.io/tests/manual/_incoming/1055_dedicated_storage_network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1055_dedicated_storage_network/ - Ref: https://github.com/harvester/harvester/issues/1055 Verified this feature has been implemented partially. 
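For the dedicated storage network scenario (#1055) that begins above, the storage-network setting can also be inspected or set from the CLI. A minimal sketch, assuming a cluster network named vlan, VLAN ID 100 and an otherwise unused IP range (all placeholder values; the JSON keys mirror the UI fields but may differ between Harvester versions):
<pre tabindex="0"><code># Show the current value and its validation status.
kubectl get settings.harvesterhci.io storage-network -o yaml
# Hypothetical value format: vlan / clusterNetwork / range are assumptions based on the UI form.
kubectl patch settings.harvesterhci.io storage-network --type merge \
  -p '{"value":"{\"vlan\":100,\"clusterNetwork\":\"vlan\",\"range\":\"192.168.100.0/24\"}"}'
</code></pre>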
Mentioned problem in https://github.com/harvester/harvester/issues/1055#issuecomment-1283754519 will be introduced as a enhancement in #2995 Test Information Environment: baremetal DL360G9 5 nodes Harvester Version: master-bd1d49a9-head ui-source Option: Auto Verify Steps: Install Harvester with any nodes Navigate to Networks -&gt; Cluster Networks/Configs, create Cluster Network named vlan Navigate to Advanced -&gt; Settings, edit storage-network Select Enable then select vlan as cluster network, fill in VLAN ID and IP Range Click Save, warning or error message should displayed. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1055">https://github.com/harvester/harvester/issues/1055</a></p> <p>Verified this feature has been implemented partially. Mentioned problem in <a href="https://github.com/harvester/harvester/issues/1055#issuecomment-1283754519">https://github.com/harvester/harvester/issues/1055#issuecomment-1283754519</a> will be introduced as a enhancement in #2995</p> <h2 id="test-information">Test Information</h2> <ul> <li>Environment: <strong>baremetal DL360G9 5 nodes</strong></li> <li>Harvester Version: <strong>master-bd1d49a9-head</strong></li> <li><strong>ui-source</strong> Option: <strong>Auto</strong></li> </ul> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Navigate to <em>Networks -&gt; Cluster Networks/Configs</em>, create Cluster Network named <code>vlan</code></li> <li>Navigate to <em>Advanced -&gt; Settings</em>, edit <code>storage-network</code></li> <li>Select <code>Enable</code> then select <code>vlan</code> as cluster network, fill in <strong>VLAN ID</strong> and <strong>IP Range</strong></li> <li>Click Save, warning or error message should displayed.</li> <li>edit <code>storage-network</code> again, <code>mgmt</code> should not in the drop-down list of <code>Cluster Network</code></li> <li>Navigate to <em>Networks -&gt; Cluster Networks/Configs</em>, create Cluster Network named <code>vlan2</code></li> <li>Create <code>Network Config</code> for all nodes</li> <li>Navigate to <em>Advanced -&gt; Settings</em>, edit <code>storage-network</code></li> <li>Select <code>Enable</code> then select <code>vlan2</code> as cluster network, fill in <strong>VLAN ID</strong> and <strong>IP Range</strong></li> <li>Navigate to <em>Networks -&gt; Cluster Networks/Configs</em>, delete Cluster Network <code>vlan2</code></li> <li>Warning or error message should displayed</li> </ol> Delete VM template default version (e2e_fe) https://harvester.github.io/tests/manual/_incoming/2376-2379-delete-vm-template-default-version/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2376-2379-delete-vm-template-default-version/ - Related issues: #2376 [BUG] Cannot delete Template Related issues: #2379 [backport v1.0.3] Cannot delete Template Category: VM Template Verification Steps Go to Advanced -&gt; Templates Create a new template Modify the template to create a new version Click the config button of the default version template Click the config button of the non default version template Expected Results If the template is the default version, it will not display the delete button If the template is not the default version, it will display the delete button We can also delete the entire template from the config button + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2376">#2376</a> [BUG] Cannot delete Template</li> <li>Related issues: <a 
href="https://github.com/harvester/harvester/issues/2379">#2379</a> [backport v1.0.3] Cannot delete Template</li> </ul> <h2 id="category">Category:</h2> <ul> <li>VM Template</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Go to Advanced -&gt; Templates</li> <li>Create a new template</li> <li>Modify the template to create a new version</li> <li>Click the config button of the <code>default version</code> template</li> <li>Click the config button of the <code>non default version</code> template</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li> <p>If the template is the <code>default version</code>, it will not display the <code>delete</code> button <img src="https://user-images.githubusercontent.com/29251855/174030567-b2c6ae52-40d1-4dd6-9ede-783409bd3c87.png" alt="image"></p> Deny the vlanconfigs overlap with the other https://harvester.github.io/tests/manual/_incoming/2828-deny-vlanconfig-overlap-others/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2828-deny-vlanconfig-overlap-others/ - Related issues: #2828 [BUG][FEATURE] Deny the vlanconfigs overlap with the other Category: Network Verification Steps Prepare a 3 nodes Harvester on local kvm Each VM have five NICs attached. Create a cluster network cn1 Create a vlanconfig config-all which applied to all nodes Set one of the NIC On the same cluster network, create another vlan network config-one which applied to only node 1 Provide another NIC Click the create button Expected Results Under the same Cluster Network: + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2828">#2828</a> [BUG][FEATURE] Deny the vlanconfigs overlap with the other</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare a 3 nodes Harvester on local kvm</li> <li>Each VM have five NICs attached.</li> <li>Create a cluster network <code>cn1</code></li> <li>Create a vlanconfig <code>config-all</code> which applied to <code>all nodes</code> <img src="https://user-images.githubusercontent.com/29251855/196409238-dd1a5d9f-bf00-46cd-93b2-c9469bf7c58a.png" alt="image"></li> <li>Set one of the NIC <img src="https://user-images.githubusercontent.com/29251855/196409451-5279f4e5-e66a-4960-8889-cc1c186acfdc.png" alt="image"></li> <li>On the same cluster network, create another vlan network <code>config-one</code> which applied to only <code>node 1</code> <img src="https://user-images.githubusercontent.com/29251855/196409565-67e2e418-1efc-4c50-a016-7fea4dd582a3.png" alt="image"></li> <li>Provide another NIC <img src="https://user-images.githubusercontent.com/29251855/196409613-e214183d-b665-453e-8fa8-246f21a11243.png" alt="image"></li> <li>Click the create button</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Under the same Cluster Network:</p> Deploy guest cluster to specific node with Node selector label https://harvester.github.io/tests/manual/_incoming/2316-2384-deploy-guest-cluster-node-selector-label-copy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2316-2384-deploy-guest-cluster-node-selector-label-copy/ - Related issues: #2316 [BUG] Guest cluster nodes distributed across failure domain Related issues: #2384 [backport v1.0.3] Guest cluster nodes distributed across failure domains Category: Rancher integration Verification Steps RKE2 Verification Steps Open Harvester Host page then edit host config Add the following key value in the 
labels page: topology.kubernetes.io/zone: zone_bp topology.kubernetes.io/region: region_bp Open the RKE2 provisioning page Expand the show advanced Click add Node selector in Node scheduling Use default Required priority Click Add Rule Provide the following key/value pairs topology. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2316">#2316</a> [BUG] Guest cluster nodes distributed across failure domain</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2384">#2384</a> [backport v1.0.3] Guest cluster nodes distributed across failure domains</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <h3 id="rke2-verification-steps">RKE2 Verification Steps</h3> <ol> <li>Open Harvester Host page then edit host config</li> <li>Add the following key value in the labels page: <ul> <li>topology.kubernetes.io/zone: zone_bp</li> <li>topology.kubernetes.io/region: region_bp <img src="https://user-images.githubusercontent.com/29251855/179735384-77e99870-92ad-41c2-b414-a872130c0b27.png" alt="image"></li> </ul> </li> <li>Open the RKE2 provisioning page</li> <li>Expand the show advanced</li> <li>Click add Node selector in <code>Node scheduling</code></li> <li>Use default <code>Required</code> priority</li> <li>Click Add Rule</li> <li>Provide the following key/value pairs <ul> <li><code>topology.kubernetes.io/zone: zone_bp</code></li> <li><code>topology.kubernetes.io/region: region_bp</code> <img src="https://user-images.githubusercontent.com/29251855/179736419-78612fd1-9990-44d8-b9be-d9a850bd27a0.png" alt="image"></li> </ul> </li> <li>Provide the following user data <pre tabindex="0"><code>password: 123456 chpasswd: { expire: False } ssh_pwauth: True </code></pr Download backing images https://harvester.github.io/tests/manual/_incoming/1436__allowing_users_to_download_backing_images/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1436__allowing_users_to_download_backing_images/ - Ref: https://github.com/harvester/harvester/issues/1436 Verify Steps: Install Harvester with any nodes Create a Image img1 Click the details of img1, Download Button should be available Click Download button, img1 should able to be downloaded and downloaded successfully. 
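For the guest-cluster node selector test above (#2316/#2384), the zone and region labels can also be applied to a Harvester node from the CLI instead of the Host edit page. A sketch assuming a node named harvester-node-1:
<pre tabindex="0"><code># Label the node, then confirm both topology labels are present.
kubectl label node harvester-node-1 \
  topology.kubernetes.io/zone=zone_bp \
  topology.kubernetes.io/region=region_bp
kubectl get node harvester-node-1 --show-labels | tr ',' '\n' | grep topology
</code></pre>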
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/1436">https://github.com/harvester/harvester/issues/1436</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/189675005-a7509189-f0c3-42e4-b5a4-d8c1bc1f6341.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create a Image <code>img1</code></li> <li>Click the details of <code>img1</code>, <strong>Download</strong> Button should be available</li> <li>Click <strong>Download</strong> button, <code>img1</code> should able to be downloaded and downloaded successfully.</li> </ol> enable/disable alertmanager on demand https://harvester.github.io/tests/manual/_incoming/2518_enabledisable_alertmanager_on_demand/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2518_enabledisable_alertmanager_on_demand/ - Ref: https://github.com/harvester/harvester/issues/2518 Verify Steps: Install Harvester with any nodes Login to Dashboard, navigate to Monitoring &amp; Logging/Monitoring/Configuration then select Alertmanager tab Option Button Enabled should be checked Select Grafana tab then access Grafana Search Alertmanager to access Overview dashboard Data should be available and keep updating + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2518">https://github.com/harvester/harvester/issues/2518</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/193554680-c2d6f7c0-5cf0-44ee-803e-c7abda408774.png" alt="image"> <img src="https://user-images.githubusercontent.com/5169694/193554761-1f28c3b9-8964-4bfa-8069-d5bcc7d8d837.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard, navigate to <strong>Monitoring &amp; Logging/Monitoring/Configuration</strong> then select <strong>Alertmanager</strong> tab</li> <li>Option Button <code>Enabled</code> should be checked</li> <li>Select <strong>Grafana</strong> tab then access Grafana</li> <li>Search <em>Alertmanager</em> to access <em>Overview</em> dashboard</li> <li>Data should be available and keep updating</li> </ol> Enabling and Tuning KSM https://harvester.github.io/tests/manual/_incoming/2302_enabling_and_tuning_ksm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2302_enabling_and_tuning_ksm/ - Ref: https://github.com/harvester/harvester/issues/2302 Verify Steps: Install Harvester with any nodes Login to Dashboard and Navigate to hosts Edit node1&rsquo;s Ksmtuned to Run and ThresCoef to 85 then Click Save Login to node1&rsquo;s console, execute kubectl get ksmtuned -oyaml --field-selector metadata.name=&lt;node1&gt; Fields in spec should be the same as Dashboard configured Create an image for VM creation Create multiple VMs with 2Gi+ memory and schedule on &lt;node1&gt; (memory size reflect to &rsquo;s maximum size, total of VMs&rsquo; memory should greater than 40%) Execute watch -n1 grep . 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/2302">https://github.com/harvester/harvester/issues/2302</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard and Navigate to hosts</li> <li>Edit <em>node1</em>&rsquo;s <strong>Ksmtuned</strong> to <code>Run</code> and <strong>ThresCoef</strong> to <code>85</code> then Click <strong>Save</strong></li> <li>Login to <em>node1</em>&rsquo;s console, execute <code>kubectl get ksmtuned -oyaml --field-selector metadata.name=&lt;node1&gt;</code></li> <li>Fields in <code>spec</code> should be the same as Dashboard configured</li> <li>Create an image for VM creation</li> <li>Create multiple VMs with 2Gi+ memory and schedule on <code>&lt;node1&gt;</code> (memory size reflect to <!-- raw HTML omitted -->&rsquo;s maximum size, total of VMs&rsquo; memory should greater than 40%)</li> <li>Execute <code>watch -n1 grep . /sys/kernel/mm/ksm/*</code> to monitor ksm&rsquo;s status change <ul> <li><code>/sys/kernel/mm/ksm/run</code> should be update to <code>1</code> after VMs started</li> <li><code>/sys/kernel/mm/ksm/page_*</code> should updating continuously</li> </ul> </li> <li>Login to Dashboard then navigate to <em>Hosts</em>, click <!-- raw HTML omitted --></li> <li>In the Tab of <strong>Ksmtuned</strong>, values in Statistics section should not be <code>0</code>. (data in this section will be updated per min, so it not equals to console&rsquo;s output was expected.)</li> <li>Stop all VMs scheduling to <code>&lt;node1&gt;</code>, the monitor data <code>/sys/kernel/mm/ksm/run</code> should be update to <code>0</code> (this is expected as it is designed to dynamically spawn ksm up when <code>ThresCoef</code> hits)</li> <li>Update <!-- raw HTML omitted -->&rsquo;s <strong>Ksmtuned</strong> to <code>Run: Prune</code></li> <li>Monitor data in Step.8 should reflect to: <ul> <li><code>/sys/kernel/mm/ksm/run</code> should be update to <code>2</code></li> <li><code>/sys/kernel/mm/ksm/pages_*</code> should be update to <code>0</code></li> </ul> </li> <li>Update <!-- raw HTML omitted -->&rsquo;s <strong>Ksmtuned</strong> to <code>Run: Stop</code></li> <li>Monitor data in Step.8 should reflect to: <ul> <li><code>/sys/kernel/mm/ksm/run</code> should be update to <code>0</code></li> </ul> </li> </ol> enhance double check of VM's resource modification https://harvester.github.io/tests/manual/_incoming/2869_enhance_double_check_of_vms_resource_modification/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2869_enhance_double_check_of_vms_resource_modification/ - Ref: https://github.com/harvester/harvester/issues/2869 Verify Steps: Install Harvester with any nodes Create an Image for VM creation Create VM vm1 Imitate video recording (as below) to test https://user-images.githubusercontent.com/5169694/193790263-19379641-e282-445f-831f-8da039c15e77.mp4 + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2869">https://github.com/harvester/harvester/issues/2869</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create an Image for VM creation</li> <li>Create VM <code>vm1</code></li> <li>Imitate video recording (as below) to test</li> </ol> <p><a href="https://user-images.githubusercontent.com/5169694/193790263-19379641-e282-445f-831f-8da039c15e77.mp4">https://user-images.githubusercontent.com/5169694/193790263-19379641-e282-445f-831f-8da039c15e77.mp4</a></p> enhance node scheduling when vm selects 
network https://harvester.github.io/tests/manual/_incoming/2982_enhance_node_scheduling_when_vm_selects_network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2982_enhance_node_scheduling_when_vm_selects_network/ - Ref: https://github.com/harvester/harvester/issues/2982 Criteria Scheduling rule added automatically when select specific network Verify Steps: go to Cluster Networks / Config page, create a new Cluster Network (eg: test) Create a new network config in the test Cluster Network. (Select a specific node) go to Network page to create a new network (e.g: test-untagged), select UntaggedNetwork type and select test cluster network. click Create button go to VM create page, fill all required value, Click Networks tab, select default/test-untagged network, click Create button The VM is successfully created, but the scheduled node may not match the Network Config ! + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2982">https://github.com/harvester/harvester/issues/2982</a></p> <h3 id="criteria">Criteria</h3> <p>Scheduling rule added automatically when select specific network <img src="https://user-images.githubusercontent.com/5169694/197729616-a6fcda2e-42ba-469f-b6c1-9c297bef1a45.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>go to <code>Cluster Networks / Config</code> page, create a new Cluster Network (eg: test)</li> <li>Create a new <code>network config</code> in the <code>test</code> Cluster Network. (Select a specific node) <img src="https://images.zenhubusercontent.com/60345555ec1db310c78aa2b8/431ba9b2-56e7-48af-bf4d-6e0ba964ebd3" alt="image.png"></li> <li>go to <code>Network</code> page</li> <li>to create a new network (e.g: <code>test-untagged</code>), select <code>UntaggedNetwork</code> type and select <code>test</code> cluster network. 
click <code>Create</code> button</li> <li>go to VM create page, fill all required value, Click <code>Networks</code> tab, select <code>default/test-untagged</code> network, click <code>Create</code> button</li> <li>The VM is successfully created, but the scheduled node may not match the Network Config ![image.png]</li> </ol> Function keys on web VNC interface https://harvester.github.io/tests/manual/_incoming/1461-function-keys-on-web-vnc-interface/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1461-function-keys-on-web-vnc-interface/ - Related issues: #1461 [UI] F keys and Alt-F keys in web VNC interface Category: Network Verification Steps Create a new VM with Ubuntu desktop 20.04 Prepare two volume Complete the installation process Open a web browser on Ubuntu desktop Check the shortcut keys combination Expected Results Check the soft shortcut keys can display and work correctly on Linux OS VM (Ubuntu desktop 20.04) Checked the following short cut can work as expected + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1461">#1461</a> [UI] F keys and Alt-F keys in web VNC interface</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a new VM with Ubuntu desktop 20.04</li> <li>Prepare two volume</li> <li>Complete the installation process</li> <li>Open a web browser on Ubuntu desktop</li> <li>Check the shortcut keys combination</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>Check the soft shortcut keys can display and work correctly on Linux OS VM (Ubuntu desktop 20.04) <img src="https://user-images.githubusercontent.com/29251855/177092853-0a9d570e-39b1-4127-ac22-2b9508d5b4f6.png" alt="image"></p> Generate Install Support Config Bundle For Single Node https://harvester.github.io/tests/manual/_incoming/1864-generate-install-support-config/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1864-generate-install-support-config/ - Related issue: #1864 Support bundle for a single node (Live/Installed) Related issue: #272 Generate supportconfig for failed installations Category: Support Environment setup Setup a single node harvester from ISO install but don&rsquo;t complete the installation Gain SSH Access to the Single Harvester Node Once Shelled into the Single Harvester Node edit the /usr/sbin/harv-install Using: harvester-installer&rsquo;s harv-install as a reference edit around line #362 adding exit 1: exit 1 trap cleanup exit check_iso save the file. 
+ <ul> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1864">#1864</a> Support bundle for a single node (Live/Installed)</p> </li> <li> <p>Related issue: <a href="https://github.com/harvester/harvester-installer/pull/272">#272</a> Generate supportconfig for failed installations</p> </li> </ul> <h2 id="category">Category:</h2> <ul> <li>Support</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup a single node harvester from ISO install but don&rsquo;t complete the installation</p> <ol> <li>Gain SSH Access to the Single Harvester Node</li> <li>Once Shelled into the Single Harvester Node edit the <code>/usr/sbin/harv-install</code></li> <li>Using: <a href="https://github.com/harvester/harvester-installer/blob/master/package/harvester-os/files/usr/sbin/harv-install#L362">harvester-installer&rsquo;s harv-install as a reference</a> edit around line #362 adding <code>exit 1</code>:</li> </ol> <pre tabindex="0"><code>exit 1 trap cleanup exit check_iso </code></pr Harvester Cloud Provider compatibility check https://harvester.github.io/tests/manual/_incoming/2753-harvester-cloud-provider-compatibility/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2753-harvester-cloud-provider-compatibility/ - Related issues: #2753 [FEATURE] Harvester Cloud Provider compatibility check enhancement Category: Rancher Integration Verification Steps Open Rancher Global settings Edit the rke-metadata-config Change the default url to https://harvester-dev.oss-cn-hangzhou.aliyuncs.com/Untitled-1.json which include the following cloud provider and csi-driver chart changes &#34;charts&#34;: { &#34;harvester-cloud-provider&#34;: { &#34;repo&#34;: &#34;rancher-rke2-charts&#34;, &#34;version&#34;: &#34;1.1.0&#34; }, &#34;harvester-csi-driver&#34;: { &#34;repo&#34;: &#34;rancher-rke2-charts&#34;, &#34;version&#34;: &#34;1.1.0&#34; }, Save and reload page Open the create RKE2 cluster page Select the incomparable RKE2 version Check the Cloud provider drop down Enable Harvester API in Preference -&gt; Enable Developer Tools &amp; Features Open settings Click view API of any setting Click up open the id&quot;: &ldquo;harvester-csi-ccm-versions&rdquo; Or directly access https://192. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2753">#2753</a> [FEATURE] Harvester Cloud Provider compatibility check enhancement</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher Integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Open Rancher Global settings</li> <li>Edit the <code>rke-metadata-config</code></li> <li>Change the default url to <code>https://harvester-dev.oss-cn-hangzhou.aliyuncs.com/Untitled-1.json</code> which include the following cloud provider and csi-driver chart changes <pre tabindex="0"><code>&#34;charts&#34;: { &#34;harvester-cloud-provider&#34;: { &#34;repo&#34;: &#34;rancher-rke2-charts&#34;, &#34;version&#34;: &#34;1.1.0&#34; }, &#34;harvester-csi-driver&#34;: { &#34;repo&#34;: &#34;rancher-rke2-charts&#34;, &#34;version&#34;: &#34;1.1.0&#34; }, </code></pr Harvester pull Rancher agent image from private registry https://harvester.github.io/tests/manual/_incoming/2175-2332-harvester-pull-rancher-image-private-registry/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2175-2332-harvester-pull-rancher-image-private-registry/ - Related issues: #2175 [BUG] Harvester fails to pull Rancher agent image from private registry Related issues: #2332 [Backport v1.0] Harvester fails to pull Rancher agent image from private registry Category: Virtual Machine Verification Steps Create a harvester cluster and a ubuntu server. Make sure they can reach each other. On each harvester node, add ubuntu IP to /etc/hosts. # vim /etc/hosts &lt;host ip&gt; myregistry.local On the ubuntu server, install docker and run the following commands. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2175">#2175</a> [BUG] Harvester fails to pull Rancher agent image from private registry</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2332">#2332</a> [Backport v1.0] Harvester fails to pull Rancher agent image from private registry</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Virtual Machine</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a harvester cluster and a ubuntu server. 
Make sure they can reach each other.</li> <li>On each harvester node, add ubuntu IP to <code>/etc/hosts</code>.</li> </ol> <pre tabindex="0"><code># vim /etc/hosts &lt;host ip&gt; myregistry.local </code></pr Harvester rebase check on SLE Micro https://harvester.github.io/tests/manual/_incoming/1933-2420-harvester-rebase-check-on-sle-micro/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1933-2420-harvester-rebase-check-on-sle-micro/ - Related issues: #1933 [FEATURE] Rebase Harvester on SLE Micro for Rancher Related issues: #2420 [FEATURE] support bundle: support SLE Micro OS Category: System Verification Steps Download support bundle in support page Extract support bundle and check every file content Vagrant install master release Execute backend E2E regression test Run frontend Cypress automated test against feature Images, Networks, Virtual machines Run manual test against feature Volume, Live migration and Backup and rancher integration Expected Results Check can download support bundle correctly, check can access every file without empty + <ul> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/1933">#1933</a> [FEATURE] Rebase Harvester on SLE Micro for Rancher</p> </li> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/2420">#2420</a> [FEATURE] support bundle: support SLE Micro OS</p> </li> </ul> <h2 id="category">Category:</h2> <ul> <li>System</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Download support bundle in support page</li> <li>Extract support bundle and check every file content</li> <li>Vagrant install master release</li> <li>Execute backend E2E regression test</li> <li>Run frontend Cypress automated test against feature Images, Networks, Virtual machines</li> <li>Run manual test against feature Volume, Live migration and Backup and rancher integration</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>Check can download support bundle correctly, check can access every file without empty</p> Harvester supports event log https://harvester.github.io/tests/manual/_incoming/2748_harvester_supports_event_log/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2748_harvester_supports_event_log/ - Ref: https://github.com/harvester/harvester/issues/2748 Verified this feature has been implemented. Test Information Environment: qemu/KVM 3 nodes Harvester Version: master-250f41e4-head ui-source Option: Auto Verify Steps: Install Graylog via docker[^1] Install Harvester with any nodes Login to Dashboard then navigate to Monitoring &amp; Logging/Logging Create Cluster Output with following: Name: gelf-evts Type: Logging/Event Output: GELF Target: &lt;Graylog_IP&gt;, &lt;Graylog_Port&gt;, &lt;UDP&gt; Create Cluster Flow with following: Name: gelf-flow Type of Matches: Event Cluster Outputs: gelf-evts Create an Image for VM creation Create a vm vm1 and start it Login to Graylog dashboard then navigate to search Select update frequency New logs should be posted continuously. 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/2748">https://github.com/harvester/harvester/issues/2748</a></p> <p>Verified this feature has been implemented.</p> <h2 id="test-information">Test Information</h2> <ul> <li>Environment: <strong>qemu/KVM 3 nodes</strong></li> <li>Harvester Version: <strong>master-250f41e4-head</strong></li> <li><strong>ui-source</strong> Option: <strong>Auto</strong></li> </ul> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install <em>Graylog</em> via docker[^1]</li> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to <em>Monitoring &amp; Logging/Logging</em></li> <li>Create <strong>Cluster Output</strong> with following: <ul> <li><strong>Name</strong>: gelf-evts</li> <li><strong>Type</strong>: <code>Logging/Event</code></li> <li><strong>Output</strong>: GELF</li> <li><strong>Target</strong>: <code>&lt;Graylog_IP&gt;, &lt;Graylog_Port&gt;, &lt;UDP&gt;</code></li> </ul> </li> <li>Create <strong>Cluster Flow</strong> with following: <ul> <li><strong>Name</strong>: gelf-flow</li> <li><strong>Type</strong> of Matches: <code>Event</code></li> <li><strong>Cluster Outputs</strong>: <code>gelf-evts</code></li> </ul> </li> <li>Create an Image for VM creation</li> <li>Create a vm <code>vm1</code> and start it</li> <li>Login to <code>Graylog</code> dashboard then navigate to search</li> <li>Select update frequency <img src="https://user-images.githubusercontent.com/5169694/191725169-d1203674-13d8-487b-9fa2-e1d9394fa5c0.png" alt="image"></li> <li>New logs should be posted continuously.</li> </ol> <h3 id="code-snippets-to-setup-graylog">code snippets to setup Graylog</h3> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>docker run --name mongo -d mongo:4.2.22-rc0 </span></span><span style="display:flex;"><span>sysctl -w vm.max_map_count<span style="color:#f92672">=</span><span style="color:#ae81ff">262145</span> </span></span><span style="display:flex;"><span>docker run --name elasticsearch -p 9200:9200 -p 9300:9300 -e xpack.security.enabled<span style="color:#f92672">=</span>false -e node.name<span style="color:#f92672">=</span>es01 -it docker.elastic.co/elasticsearch/elasticsearch:6.8.23 </span></span><span style="display:flex;"><span>docker run --name graylog --link mongo --link elasticsearch -p 9000:9000 -p 12201:12201 -p 1514:1514 -p 5555:5555 -p 12202:12202 -p 12202:12202/udp -e GRAYLOG_PASSWORD_SECRET<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Graypass3WordMor!e&#34;</span> -e GRAYLOG_ROOT_PASSWORD_SHA2<span style="color:#f92672">=</span>899e9793de44cbb14f48b4fce810de122093d03705c0971752a5c15b0fa1ae03 -e GRAYLOG_HTTP_EXTERNAL_URI<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;http://127.0.0.1:9000/&#34;</span> -d graylog/graylog:4.3.5 </span></span></code></pr Harvester supports kube-audit log https://harvester.github.io/tests/manual/_incoming/2747_harvester_supports_kube-audit_log/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2747_harvester_supports_kube-audit_log/ - Ref: https://github.com/harvester/harvester/issues/2747 Verify Steps: Install Graylog via docker[^1] Install Harvester with any nodes Login to Dashboard then navigate to Monitoring &amp; Logging/Logging Create Cluster Output with following: Name: gelf-evts Type: Audit Only Output: GELF Target: &lt;Graylog_IP&gt;, 
&lt;Graylog_Port&gt;, &lt;UDP&gt; Create Cluster Flow with following: Name: gelf-flow Type of Matches: Audit Cluster Outputs: gelf-evts Create an Image for VM creation Create a vm vm1 and start it Login to Graylog dashboard then navigate to search Select update frequency New logs should be posted continuously. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2747">https://github.com/harvester/harvester/issues/2747</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install <em>Graylog</em> via docker[^1]</li> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to <em>Monitoring &amp; Logging/Logging</em></li> <li>Create <strong>Cluster Output</strong> with following: <ul> <li><strong>Name</strong>: gelf-evts</li> <li><strong>Type</strong>: <code>Audit Only</code></li> <li><strong>Output</strong>: GELF</li> <li><strong>Target</strong>: <code>&lt;Graylog_IP&gt;, &lt;Graylog_Port&gt;, &lt;UDP&gt;</code></li> </ul> </li> <li>Create <strong>Cluster Flow</strong> with following: <ul> <li><strong>Name</strong>: gelf-flow</li> <li><strong>Type</strong> of Matches: <code>Audit</code></li> <li><strong>Cluster Outputs</strong>: <code>gelf-evts</code></li> </ul> </li> <li>Create an Image for VM creation</li> <li>Create a vm <code>vm1</code> and start it</li> <li>Login to <code>Graylog</code> dashboard then navigate to search</li> <li>Select update frequency <img src="https://user-images.githubusercontent.com/5169694/191725169-d1203674-13d8-487b-9fa2-e1d9394fa5c0.png" alt="image"></li> <li>New logs should be posted continuously.</li> </ol> <h3 id="code-snippets-to-setup-graylog">code snippets to setup Graylog</h3> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>docker run --name mongo -d mongo:4.2.22-rc0 </span></span><span style="display:flex;"><span>sysctl -w vm.max_map_count<span style="color:#f92672">=</span><span style="color:#ae81ff">262145</span> </span></span><span style="display:flex;"><span>docker run --name elasticsearch -p 9200:9200 -p 9300:9300 -e xpack.security.enabled<span style="color:#f92672">=</span>false -e node.name<span style="color:#f92672">=</span>es01 -it docker.elastic.co/elasticsearch/elasticsearch:6.8.23 </span></span><span style="display:flex;"><span>docker run --name graylog --link mongo --link elasticsearch -p 9000:9000 -p 12201:12201 -p 1514:1514 -p 5555:5555 -p 12202:12202 -p 12202:12202/udp -e GRAYLOG_PASSWORD_SECRET<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Graypass3WordMor!e&#34;</span> -e GRAYLOG_ROOT_PASSWORD_SHA2<span style="color:#f92672">=</span>899e9793de44cbb14f48b4fce810de122093d03705c0971752a5c15b0fa1ae03 -e GRAYLOG_HTTP_EXTERNAL_URI<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;http://127.0.0.1:9000/&#34;</span> -d graylog/graylog:4.3.5 </span></span></code></pr Harvester uses active-backup as the default bond mode https://harvester.github.io/tests/manual/_incoming/2472_harvester_uses_active-backup_as_the_default_bond_mode/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2472_harvester_uses_active-backup_as_the_default_bond_mode/ - Ref: https://github.com/harvester/harvester/issues/2472 Verify Steps: Install Harvester via ISO The default Bond Mode should select active-backup Ater installed with active-backup mode, login to console Execute cat 
/etc/sysconfig/network/ifcfg-harvester-mgmt, BONDING_MODULE_OPTS should contains mode=active-backup + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2472">https://github.com/harvester/harvester/issues/2472</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/184838334-a723f066-8eef-4cbc-ab66-6e02b758823d.png" alt="image"> <img src="https://user-images.githubusercontent.com/5169694/184839241-3702fa7c-950e-4b51-8c18-d29d4121f848.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester via ISO</li> <li>The default <strong>Bond Mode</strong> should select <code>active-backup</code></li> <li>Ater installed with <code>active-backup</code> mode, login to console</li> <li>Execute <code>cat /etc/sysconfig/network/ifcfg-harvester-mgmt</code>, <strong>BONDING_MODULE_OPTS</strong> should contains <code>mode=active-backup</code></li> </ol> Image filtering by labels https://harvester.github.io/tests/manual/_incoming/2319-image-filtering-by-labels/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2319-image-filtering-by-labels/ - Related issues: #2319 [FEATURE] Image filtering by labels Category: Image Verification Steps Upload several images and add related label Go to the image list page Add filter according to test plan 1 Go to VM creation page Check the image list and search by name Import Harvester in Rancher Go to cluster management page Create a RKE2 cluster Check the image list and search by name Expected Results Test Result 1: The image list page can be filtered by label in the following cases + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2319">#2319</a> [FEATURE] Image filtering by labels</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Image</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Upload several images and add related label</li> <li>Go to the image list page</li> <li>Add filter according to test plan 1</li> <li>Go to VM creation page</li> <li>Check the image list and search by name</li> <li>Import Harvester in Rancher</li> <li>Go to cluster management page</li> <li>Create a RKE2 cluster</li> <li>Check the image list and search by name</li> </ol> <h2 id="expected-results">Expected Results</h2> <h4 id="test-result-1">Test Result 1:</h4> <p>The image list page can be filtered by label in the following cases</p> Image filtering by labels (e2e_fe) https://harvester.github.io/tests/manual/_incoming/2474-image-filtering-by-labels/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2474-image-filtering-by-labels/ - Related issues: #2474 [backport v1.0] [FEATURE] Image filtering by labels Category: Image Verification Steps Upload several images and add related label Go to the image list page Add filter according to test plan 1 Go to VM creation page Check the image list and search by name Import Harvester in Rancher Go to cluster management page Create a RKE2 cluster Check the image list and search by name Expected Results The image list page can be filtered by label in the following cases + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2474">#2474</a> [backport v1.0] [FEATURE] Image filtering by labels</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Image</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Upload several images and add related label</li> <li>Go to the image list page</li> <li>Add filter according to test plan 1</li> <li>Go to 
VM creation page</li> <li>Check the image list and search by name</li> <li>Import Harvester in Rancher</li> <li>Go to cluster management page</li> <li>Create a RKE2 cluster</li> <li>Check the image list and search by name</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>The image list page can be filtered by label in the following cases</p> Image handling consistency between terraform data resource and Harvester UI created image https://harvester.github.io/tests/manual/_incoming/2443-image-consistency-terraform-data-harvester-ui/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2443-image-consistency-terraform-data-harvester-ui/ - Related issues: #2443 [BUG] Image handling inconsistency between &ldquo;Harvester Terraform harvester_image data source&rdquo; vs. &ldquo;UI created Image&rdquo; Category: Terraform Verification Steps Download latest terraform-provider terraform-provider-harvester_0.5.1_linux_amd64.zip Extra the zip file Create the install-terraform-provider-harvester.sh with the following content #!/usr/bin/env bash [[ -n $DEBUG ]] &amp;&amp; set -x set -eou pipefail usage() { cat &lt;&lt;HELP USAGE: install-terraform-provider-harvester.sh HELP } version=0.5.1 arch=linux_amd64 terraform_harvester_provider_bin=./terraform-provider-harvester terraform_harvester_provider_dir=&#34;${HOME}/.terraform.d/plugins/registry.terraform.io/harvester/harvester/${version}/${arch}/&#34; mkdir -p &#34;${terraform_harvester_provider_dir}&#34; cp ${terraform_harvester_provider_bin} &#34;${terraform_harvester_provider_dir}/terraform-provider-harvester_v${version}&#34; Rename the extraced terraform-provider-harvester_v0. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2443">#2443</a> [BUG] Image handling inconsistency between &ldquo;Harvester Terraform harvester_image data source&rdquo; vs. 
&ldquo;UI created Image&rdquo;</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Terraform</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Download latest terraform-provider <a href="https://github.com/harvester/terraform-provider-harvester/releases/download/v0.5.1/terraform-provider-harvester_0.5.1_linux_amd64.zip">terraform-provider-harvester_0.5.1_linux_amd64.zip</a></p> </li> <li> <p>Extra the zip file</p> </li> <li> <p>Create the install-terraform-provider-harvester.sh with the following content</p> <pre tabindex="0"><code>#!/usr/bin/env bash [[ -n $DEBUG ]] &amp;&amp; set -x set -eou pipefail usage() { cat &lt;&lt;HELP USAGE: install-terraform-provider-harvester.sh HELP } version=0.5.1 arch=linux_amd64 terraform_harvester_provider_bin=./terraform-provider-harvester terraform_harvester_provider_dir=&#34;${HOME}/.terraform.d/plugins/registry.terraform.io/harvester/harvester/${version}/${arch}/&#34; mkdir -p &#34;${terraform_harvester_provider_dir}&#34; cp ${terraform_harvester_provider_bin} &#34;${terraform_harvester_provider_dir}/terraform-provider-harvester_v${version}&#34; </code></pr Image naming with inline CSS (e2e_fe) https://harvester.github.io/tests/manual/_incoming/2563-image-naming-inline-css/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2563-image-naming-inline-css/ - Related issues: #2563 [[BUG] harvesterhci.io.virtualmachineimage spec.displayName displays differently in single view of image Category: Images Verification Steps Go to images Click &ldquo;Create&rdquo; Upload an image or leverage an url - but name the image something like: &lt;strong&gt;&lt;em&gt;something_interesting&lt;/em&gt;&lt;/strong&gt; Wait for upload to complete. Observe the display name within the list of images Compare that to clicking into the single image and viewing it Expected Results The list view naming would be the same as the single view of the image + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2563">#2563</a> [[BUG] harvesterhci.io.virtualmachineimage spec.displayName displays differently in single view of image</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Images</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Go to images</li> <li>Click &ldquo;Create&rdquo;</li> <li>Upload an image or leverage an url - but name the image something like: <code>&lt;strong&gt;&lt;em&gt;something_interesting&lt;/em&gt;&lt;/strong&gt;</code></li> <li>Wait for upload to complete.</li> <li>Observe the display name within the list of images</li> <li>Compare that to clicking into the single image and viewing it</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The list view naming would be the same as the single view of the image</li> </ol> Image upload does not start when HTTP Proxy is configured https://harvester.github.io/tests/manual/_incoming/2436-2524-image-upload-failed-when-http-proxy-configured/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2436-2524-image-upload-failed-when-http-proxy-configured/ - Related issues: #2436 [BUG] Image upload does not start when HTTP Proxy is configured Related issues: #2524 [backport v1.0] [BUG] Image upload does not start when HTTP Proxy is configured Category: Image Verification Steps Clone ipxe-example vagrant project https://github.com/harvester/ipxe-examples Edit settings.yml Set harvester_network_config.offline=true Create a one node air gapped Harvester with a HTTP proxy server 
Access Harvester settings page Add the following http proxy configuration { &#34;httpProxy&#34;: &#34;http://192.168.0.254:3128&#34;, &#34;httpsProxy&#34;: &#34;http://192. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2436">#2436</a> [BUG] Image upload does not start when HTTP Proxy is configured</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2524">#2524</a> [backport v1.0] [BUG] Image upload does not start when HTTP Proxy is configured</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Image</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Clone ipxe-example vagrant project <a href="https://github.com/harvester/ipxe-examples">https://github.com/harvester/ipxe-examples</a></li> <li>Edit settings.yml</li> <li>Set <code>harvester_network_config.offline=true</code></li> <li>Create a one node air gapped Harvester with a HTTP proxy server</li> <li>Access Harvester settings page</li> <li>Add the following http proxy configuration</li> </ol> <pre tabindex="0"><code>{ &#34;httpProxy&#34;: &#34;http://192.168.0.254:3128&#34;, &#34;httpsProxy&#34;: &#34;http://192.168.0.254:3128&#34;, &#34;noProxy&#34;: &#34;localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,cattle-system.svc,192.168.0.0/16,.svc,.cluster.local,example.com&#34; } </code></pr Improved resource reservation https://harvester.github.io/tests/manual/_incoming/2347_improved_resource_reservation/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2347_improved_resource_reservation/ - Ref: https://github.com/harvester/harvester/issues/2347, https://github.com/harvester/harvester/issues/1700 Test Information Environment: Baremetal DL160G9 5 nodes Harvester Version: master-96b90714-head ui-source Option: Auto Verify Steps: Install Harvester with any nodes Login and Navigate to Hosts CPU/Memory/Storage should display Reserved and Used percentage. Navigate to Host&rsquo;s details Monitor Data should display Reserved and Used percentage, and should equals to the value in Hosts. 
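For the resource-reservation check above, one possible cross-reference for the Reserved figures on the Hosts page is the per-node resource requests the scheduler reports; whether the UI value maps exactly to kubelet requests is an assumption here, and the node name is a placeholder:

```bash
# Show the per-node requests/limits summary that kubelet reports;
# compare it against the "Reserved" percentages shown on the Hosts page.
kubectl describe node harvester-node-0 | grep -A 8 "Allocated resources"
```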
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/2347">https://github.com/harvester/harvester/issues/2347</a>, <a href="https://github.com/harvester/harvester/issues/1700">https://github.com/harvester/harvester/issues/1700</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/174753699-f65e66c6-677b-4a3a-8f71-bfbb7a3b1bb2.png" alt="image"> <img src="https://user-images.githubusercontent.com/5169694/174754418-c5786f38-5909-40ce-8076-c3eddcd3059a.png" alt="image"></p> <h2 id="test-information">Test Information</h2> <ul> <li>Environment: <strong>Baremetal DL160G9 5 nodes</strong></li> <li>Harvester Version: <strong>master-96b90714-head</strong></li> <li><strong>ui-source</strong> Option: <strong>Auto</strong></li> </ul> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login and Navigate to Hosts</li> <li>CPU/Memory/Storage should display <strong>Reserved</strong> and <strong>Used</strong> percentage.</li> <li>Navigate to Host&rsquo;s details</li> <li>Monitor Data should display <strong>Reserved</strong> and <strong>Used</strong> percentage, and should equals to the value in Hosts.</li> </ol> Install Harvester over previous GNU/Linux install https://harvester.github.io/tests/manual/_incoming/2230-2450-install-harvester-over-gnu-linux/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2230-2450-install-harvester-over-gnu-linux/ - Related issues: #2230 [BUG] harvester installer - always first attempt failed if before was linux installed Related issues: #2450 [backport v1.0][BUG] harvester installer - always first attempt failed if before was linux installed #2450 Category: Installtion Verification Steps Install GNU/LInux LVM configuration reboot Install Harvester via ISO over previous linux install Verifiy Harvester install by changing password and logging in. 
Expected Results Install should complete + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2230">#2230</a> [BUG] harvester installer - always first attempt failed if before was linux installed</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2450">#2450</a> [backport v1.0][BUG] harvester installer - always first attempt failed if before was linux installed #2450</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Installtion</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install GNU/LInux LVM configuration</li> <li>reboot</li> <li>Install Harvester via ISO over previous linux install</li> <li>Verifiy Harvester install by changing password and logging in.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Install should complete</li> </ol> Instance metadata variables are not expanded https://harvester.github.io/tests/manual/_incoming/2342_instance_metadata_variables_are_not_expanded/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2342_instance_metadata_variables_are_not_expanded/ - Ref: https://github.com/harvester/harvester/issues/2342 Verify Steps: Install Harvester with any nodes Create Image for VM creation Create VM with following CloudConfig ## template: jinja #cloud-config package_update: true password: password chpasswd: { expire: False } sshpwauth: True write_files: - content: | #!/bin/bash vmName=$1 echo &#34;VM Name is: $vmName&#34; &gt; /home/cloudinitscript.log path: /home/exec_initscript.sh permissions: &#39;0755&#39; runcmd: - - systemctl - enable - --now - qemu-guest-agent.service - - echo - &#34;{{ ds.meta_data.local_hostname }}&#34; - - /home/exec_initscript. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2342">https://github.com/harvester/harvester/issues/2342</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/177121301-f30bf8ec-0a70-4549-b11b-895161ee30ad.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create Image for VM creation</li> <li>Create VM with following <em>CloudConfig</em></li> </ol> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#75715e">## template: jinja</span> </span></span><span style="display:flex;"><span><span style="color:#75715e">#cloud-config</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">package_update</span>: <span style="color:#66d9ef">true</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">password</span>: <span style="color:#ae81ff">password</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">chpasswd</span>: { <span style="color:#f92672">expire</span>: <span style="color:#66d9ef">False</span> } </span></span><span style="display:flex;"><span><span style="color:#f92672">sshpwauth</span>: <span style="color:#66d9ef">True</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">write_files</span>: </span></span><span style="display:flex;"><span> - <span style="color:#f92672">content</span>: |<span style="color:#e6db74"> </span></span></span><span style="display:flex;"><span><span style="color:#e6db74"> #!/bin/bash </span></span></span><span style="display:flex;"><span><span style="color:#e6db74"> 
vmName=$1 </span></span></span><span style="display:flex;"><span><span style="color:#e6db74"> echo &#34;VM Name is: $vmName&#34; &gt; /home/cloudinitscript.log</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">path</span>: <span style="color:#ae81ff">/home/exec_initscript.sh</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">permissions</span>: <span style="color:#e6db74">&#39;0755&#39;</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">runcmd</span>: </span></span><span style="display:flex;"><span> - - <span style="color:#ae81ff">systemctl</span> </span></span><span style="display:flex;"><span> - <span style="color:#ae81ff">enable</span> </span></span><span style="display:flex;"><span> - --<span style="color:#ae81ff">now</span> </span></span><span style="display:flex;"><span> - <span style="color:#ae81ff">qemu-guest-agent.service</span> </span></span><span style="display:flex;"><span> - - <span style="color:#ae81ff">echo</span> </span></span><span style="display:flex;"><span> - <span style="color:#e6db74">&#34;{{ ds.meta_data.local_hostname }}&#34;</span> </span></span><span style="display:flex;"><span> - - <span style="color:#ae81ff">/home/exec_initscript.sh</span> </span></span><span style="display:flex;"><span> - <span style="color:#e6db74">&#34;{{ ds.meta_data.local_hostname }}&#34;</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">packages</span>: </span></span><span style="display:flex;"><span> - <span style="color:#ae81ff">qemu-guest-agent</span> </span></span></code></pr ISO installation console UI Display https://harvester.github.io/tests/manual/_incoming/2402-iso-installation-console-ui-display/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2402-iso-installation-console-ui-display/ - Related issues: #2402 [FEATURE] Enhance the information display of ISO installation console UI (tty) Category: Harvester Installer Verification Steps ISO install a single node Harvester Monitoring the ISO installation console UI ISO install a three node Harvester cluster Monitoring the ISO installation console UI of the first node Monitoring the ISO installation console UI of the second node Monitoring the ISO installation console UI of the third node Expected Results The ISO installation console UI enhancement can display correctly under the following single and multiple nodes scenarios. 
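For the instance-metadata case above, the cloud-config writes its output to /home/cloudinitscript.log, so expansion of the jinja variable can be checked from inside the guest. A minimal sketch, assuming console or SSH access to the VM:

```bash
# Confirm the jinja variable was expanded rather than written out literally.
cat /home/cloudinitscript.log      # expected: "VM Name is: <the VM's hostname>"
hostnamectl --static               # compare with the hostname the guest reports

# If the raw template string is still present, expansion did not happen.
grep -q "ds.meta_data" /home/cloudinitscript.log && echo "variable NOT expanded"
```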
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2402">#2402</a> [FEATURE] Enhance the information display of ISO installation console UI (tty)</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Harvester Installer</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>ISO install a single node Harvester</li> <li>Monitoring the ISO installation console UI</li> <li>ISO install a three node Harvester cluster</li> <li>Monitoring the ISO installation console UI of the first node</li> <li>Monitoring the ISO installation console UI of the second node</li> <li>Monitoring the ISO installation console UI of the third node</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>The ISO installation console UI enhancement can display correctly under the following single and multiple nodes scenarios.</p> Ksmd support merge_across_node on/off https://harvester.github.io/tests/manual/_incoming/2827_ksmd_support_merge_across_node_onoff_/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2827_ksmd_support_merge_across_node_onoff_/ - Ref: https://github.com/harvester/harvester/issues/2827 Verify Steps: Install Harvester with any nodes Login to Dashboard and Navigate to hosts Edit node1&rsquo;s Ksmtuned to Run and ThresCoef to 85 then Click Save Login to node1&rsquo;s console, execute kubectl get ksmtuned -oyaml --field-selector metadata.name=&lt;node1&gt; Fields in spec should be the same as Dashboard configured Create an image for VM creation Create multiple VMs with 2Gi+ memory and schedule on &lt;node1&gt; (memory size reflect to &rsquo;s maximum size, total of VMs&rsquo; memory should greater than 40%) Execute watch -n1 grep . + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2827">https://github.com/harvester/harvester/issues/2827</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/193305898-48255477-1d19-48af-b132-3c019bd3f58b.png" alt="image"> <img src="https://user-images.githubusercontent.com/5169694/193314630-7add9b5a-2d9e-49cb-8d3a-1075531145e8.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard and Navigate to hosts</li> <li>Edit <em>node1</em>&rsquo;s <strong>Ksmtuned</strong> to <code>Run</code> and <strong>ThresCoef</strong> to <code>85</code> then Click <strong>Save</strong></li> <li>Login to <em>node1</em>&rsquo;s console, execute <code>kubectl get ksmtuned -oyaml --field-selector metadata.name=&lt;node1&gt;</code></li> <li>Fields in <code>spec</code> should be the same as Dashboard configured</li> <li>Create an image for VM creation</li> <li>Create multiple VMs with 2Gi+ memory and schedule on <code>&lt;node1&gt;</code> (memory size reflect to <!-- raw HTML omitted -->&rsquo;s maximum size, total of VMs&rsquo; memory should greater than 40%)</li> <li>Execute <code>watch -n1 grep . /sys/kernel/mm/ksm/*</code> to monitor ksm&rsquo;s status change <ul> <li><code>/sys/kernel/mm/ksm/run</code> should be update to <code>1</code> after VMs started</li> <li><code>/sys/kernel/mm/ksm/page_*</code> should updating continuously</li> </ul> </li> <li>Login to Dashboard then navigate to <em>Hosts</em>, click <!-- raw HTML omitted --></li> <li>In the Tab of <strong>Ksmtuned</strong>, values in Statistics section should not be <code>0</code>. 
(data in this section will be updated per min, so it not equals to console&rsquo;s output was expected.)</li> <li>Update <!-- raw HTML omitted -->&rsquo;s <strong>Ksmtuned</strong> to check <code>Enable Merge Across Nodes</code></li> <li>Monitor data in Step.8 should reflect to: <ul> <li><code>/sys/kernel/mm/ksm/run</code> should be updated to <code>2</code></li> <li><code>/sys/kernel/mm/ksm/pages_*</code> should be updated to <code>0</code></li> </ul> </li> <li>Restart all VMs scheduling to <code>&lt;node1&gt;</code></li> <li>Monitor data in Step.8 should reflect to: <ul> <li><code>/sys/kernel/mm/ksm/run</code> should be updated to <code>1</code></li> <li><code>/sys/kernel/mm/ksm/pages_*</code> should be updated and less than Step.8 monitored</li> </ul> </li> </ol> Limit VM of guest cluster in the same namespace https://harvester.github.io/tests/manual/_incoming/2354-limit-vm-of-guest-cluster-same-namespace/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2354-limit-vm-of-guest-cluster-same-namespace/ - Related issues: #2354 [FEATURE] Limit all VMs of the Harvester guest cluster in the same namespace Category: Rancher integration Verification Steps Import Harvester from Rancher Access Harvester via virtualization management Create a test project and ns1 namespace Create two RKE1 node template, one set to default namespace and another set to ns1 namespace Create a RKE1 cluster, select the first pool using the first node template Create another pool, check can&rsquo;t select the second node template Create a RKE2 cluster, set the first pool using specific namespace Add another machine pool, check it will automatically assigned the same namespace as the first pool Expected Results On RKE2 cluster page, when we select the first machine pool to specific namespace, then the second pool will automatically and can only use the same namespace as the first pool + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2354">#2354</a> [FEATURE] Limit all VMs of the Harvester guest cluster in the same namespace</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import Harvester from Rancher</li> <li>Access Harvester via virtualization management</li> <li>Create a test project and <code>ns1</code> namespace</li> <li>Create two RKE1 node template, one set to default namespace and another set to ns1 namespace</li> <li>Create a RKE1 cluster, select the first pool using the first node template</li> <li>Create another pool, check can&rsquo;t select the second node template</li> <li>Create a RKE2 cluster, set the first pool using specific namespace</li> <li>Add another machine pool, check it will automatically assigned the same namespace as the first pool</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li> <p>On RKE2 cluster page, when we select the first machine pool to specific namespace, then the second pool will automatically and can only use the same namespace as the first pool</p> Local cluster user input topology key https://harvester.github.io/tests/manual/_incoming/2567-local-cluster-user-input-topology-key/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2567-local-cluster-user-input-topology-key/ - Related issues: #2567 [BUG] Local cluster owner create Harvester cluster failed(RKE2) Category: Rancher integration Verification Steps Import Harvester from Rancher Create a standard user local in 
Rancher User &amp; Authentication Open Cluster Management page Edit cluster config Expand Member Roles Add local user with Cluster Owner role Create cloud credential of Harvester Login with local user Open the provisioning RKE2 cluster page Select Advanced settings Add Pod Scheduling Select Pods in these namespaces Check can input Topology key value Expected Results Login with cluster owner role and provision a RKE2 cluster we can input the topology key in the Topology key field of the pod selector + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2567">#2567</a> [BUG] Local cluster owner create Harvester cluster failed(RKE2)</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import Harvester from Rancher</li> <li>Create a standard user <code>local</code> in Rancher User &amp; Authentication</li> <li>Open Cluster Management page</li> <li>Edit cluster config <img src="https://user-images.githubusercontent.com/29251855/182781682-5cdd3c6a-517b-4f61-980d-3ee3cab86745.png" alt="image"></li> <li>Expand Member Roles</li> <li>Add <code>local</code> user with Cluster Owner role <img src="https://user-images.githubusercontent.com/29251855/182781823-b71ba504-6488-4581-b50d-17c333496b8c.png" alt="image"></li> <li>Create cloud credential of Harvester</li> <li>Login with <code>local</code> user</li> <li>Open the provisioning RKE2 cluster page</li> <li>Select Advanced settings</li> <li>Add Pod Scheduling</li> <li>Select <code>Pods in these namespaces</code></li> <li>Check can input Topology key value</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Login with cluster owner role and provision a RKE2 cluster</li> <li>we can input the topology key in the Topology key field of the pod selector <img src="https://user-images.githubusercontent.com/29251855/182752496-1fa49c1d-1b93-4147-9d5b-ef3a56d5bd2b.png" alt="image"></li> </ol> Logging Output Filter https://harvester.github.io/tests/manual/_incoming/2817-logging-output-filter/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2817-logging-output-filter/ - Related issues: #2817 [BUG]Logging Output needs filter Category: Audit Logging Verification Steps Create an Audit Only type of Output named audit-output Create an Audit Only type of ClusterOutput named audit-cluster-output Create a Flow, select the type to Logging or Event Check you can&rsquo;t select the audit-output and audiot-cluster-output select the type to Audit Check you can select the audit-output and audit-cluster-output Create a ClusterFlow, select the type to Logging or Event Check you can&rsquo;t select the audiot-cluster-output select the type to Audit Check you can select the audiot-cluster-output Create an logging/event type of Output named logging-event-output Create an logging/event type of ClusterOutput named logging-event-cluster-output Create a Flow, select the type to Logging or Event Check you can select the logging-event-output and logging-event-output Create a ClusterFlow, select the type to Logging or Event Check you can select the logging-event-output and logging-event-output Expected Results The logging or the Event type of Flow can only select Logging or Event type of Output Can&rsquo;t select the Audit type of Output The logging or the Event type of ClusterFlow can only select Logging or Event type of ClusterOutput Can&rsquo;t select the Audit type of ClusterOutput The Audit type of Flow can only select 
Audit type of Output The Audit type of ClusterFlow can only select Audit type of ClusterOutput + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2817">#2817</a> [BUG]Logging Output needs filter</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Audit Logging</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create an <code>Audit Only</code> type of Output named <code>audit-output</code> <img src="https://user-images.githubusercontent.com/29251855/193509247-09f5efd9-c43d-4514-bb84-55cd34c243b1.png" alt="image"></li> <li>Create an <code>Audit Only</code> type of ClusterOutput named <code>audit-cluster-output</code></li> <li>Create a Flow, select the type to <code>Logging</code> or <code>Event</code></li> <li>Check you <strong>can&rsquo;t</strong> select the <code>audit-output</code> and <code>audiot-cluster-output</code></li> <li>select the type to <code>Audit </code></li> <li>Check you <strong>can</strong> select the <code>audit-output</code> and <code>audit-cluster-output</code> <img src="https://user-images.githubusercontent.com/29251855/193510780-2f2f6d09-7ee6-433b-80ae-eb3879337513.png" alt="image"></li> <li>Create a ClusterFlow, select the type to <code>Logging</code> or <code>Event</code></li> <li>Check you <strong>can&rsquo;t</strong> select the <code>audiot-cluster-output</code></li> <li>select the type to <code>Audit</code></li> <li>Check you <strong>can</strong> select the <code>audiot-cluster-output</code></li> <li>Create an <code>logging/event</code> type of Output named <code>logging-event-output</code> <img src="https://user-images.githubusercontent.com/29251855/193512327-8ff2cadf-d02d-453f-96e9-fbc7d64ad91f.png" alt="image"></li> <li>Create an <code>logging/event</code> type of ClusterOutput named <code>logging-event-cluster-output</code> <img src="https://user-images.githubusercontent.com/29251855/193512534-82d03364-b2f2-4bcb-b676-814ab5a9da6d.png" alt="image"></li> <li>Create a Flow, select the type to <code>Logging</code> or <code>Event</code></li> <li>Check you <strong>can</strong> select the <code>logging-event-output</code> and <code>logging-event-output</code></li> <li>Create a ClusterFlow, select the type to <code>Logging</code> or <code>Event</code></li> <li>Check you <strong>can</strong> select the <code>logging-event-output</code> and <code>logging-event-output</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>The <code>logging</code> or the <code>Event</code> type of <code>Flow</code> can only select <code>Logging</code> or <code>Event</code> type of <code>Output</code> <img src="https://user-images.githubusercontent.com/29251855/193512689-d56ddf11-0db8-4a10-ba9f-0425fb22710d.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/193512719-4056e234-7e0a-49e4-9503-7bbd75075e0f.png" alt="image"></p> Multiple Disks Swapping Paths https://harvester.github.io/tests/manual/_incoming/1874-extra-disk-swap-path/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1874-extra-disk-swap-path/ - Related issues: #1874 Multiple Disks Swapping Paths Verification Steps Prepare a harvester cluster (single node is sufficient) Prepare two additional disks and format both of them. Hotplug both disks and add them to the host via Harvester Dashboard (&ldquo;Hosts&rdquo; &gt; &ldquo;Edit Config&rdquo; &gt; &ldquo;Disks&rdquo;) Shutdown the host. 
Swap the address and slot of the two disks in order to make their dev paths swapped For libvirt environment, you can swap &lt;address&gt; and &lt;target&gt; in the XML of the disk. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1874">#1874</a> Multiple Disks Swapping Paths</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare a harvester cluster (single node is sufficient)</li> <li>Prepare two additional disks and format both of them.</li> <li>Hotplug both disks and add them to the host via Harvester Dashboard (&ldquo;Hosts&rdquo; &gt; &ldquo;Edit Config&rdquo; &gt; &ldquo;Disks&rdquo;)</li> <li>Shutdown the host.</li> <li>Swap the address and slot of the two disks in order to make their dev paths swapped <ul> <li>For libvirt environment, you can swap <code>&lt;address&gt;</code> and <code>&lt;target&gt;</code> in the XML of the disk.</li> </ul> </li> <li>Reboot the host</li> <li>Navigate to the &ldquo;Host&rdquo; page, both disks should be healthy and scheduled.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Disks should be healthy and <code>scheduable</code> after paths swapped.</li> </ol> Namespace pending on terminating https://harvester.github.io/tests/manual/_incoming/2591_namespace_pending_on_terminating/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2591_namespace_pending_on_terminating/ - Ref: https://github.com/harvester/harvester/issues/2591 Verify Steps: Install Harvester with any nodes Login to dashboard and navigate to Namespaces Trying to delete any namespaces, prompt windows should shows warning message + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2591">https://github.com/harvester/harvester/issues/2591</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/185376639-66d10a36-7f68-4689-9cd6-4ef6034f1aac.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to dashboard and navigate to <em>Namespaces</em></li> <li>Trying to delete any namespaces, prompt windows should shows warning message</li> </ol> Negative change backup target while restoring backup https://harvester.github.io/tests/manual/_incoming/2560-change-backup-target-while-restoring/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2560-change-backup-target-while-restoring/ - Related issues: #2560 [BUG] VM hanging on restoring state when backup-target disconnected suddenly Category: Category Verification Steps Install Harvester with any nodes Login to Dashboard then navigate to Advanced/Settings, setup backup-target with NFS or S3 Create Image for VM creation Create VM vm1 Take Backup vm1b from vm1 Restore the backup vm1b to New/Existing VM When the VM still in restoring state, update backup-target settings to Use the default value then setup it back. 
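For the disk-swapping scenario above, in a libvirt-based setup the `<address>` and `<target>` elements mentioned in the steps can be inspected and swapped with standard virsh commands while the node is shut off. The domain name harvester-node-0 is a placeholder:

```bash
# Inspect the disk definitions of the (powered-off) Harvester node VM.
virsh dumpxml harvester-node-0 | grep -E -A 6 "<disk type"

# Swap the <address> and <target> of the two extra data disks, then reboot the node.
virsh edit harvester-node-0
```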
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2560">#2560</a> [BUG] VM hanging on restoring state when backup-target disconnected suddenly</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Category</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to Advanced/Settings, setup backup-target with NFS or S3</li> <li>Create Image for VM creation</li> <li>Create VM vm1</li> <li>Take Backup vm1b from vm1</li> <li>Restore the backup vm1b to New/Existing VM</li> <li>When the VM still in restoring state, update backup-target settings to Use the default value then setup it back.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error <img src="https://user-images.githubusercontent.com/5169694/182815277-98baa7bc-42d1-4404-be87-d60f3b6ba1fd.png" alt="image"></li> </ol> Negative Harvester installer input same NIC IP and VIP https://harvester.github.io/tests/manual/_incoming/2229-2377-negative-installer-same-nic-ip-and-vip/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2229-2377-negative-installer-same-nic-ip-and-vip/ - Related issues: #2229 [BUG] input nic ip and vip with same ip address in Harvester-Installer Related issues: #2377 [Backport v1.0.3] input nic ip and vip with same ip address in Harvester-Installer Category: Installation Verification Steps Boot into ISO installer Specify same IP for NIC and VIP Expected Results Error message is displayed + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2229">#2229</a> [BUG] input nic ip and vip with same ip address in Harvester-Installer</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2377">#2377</a> [Backport v1.0.3] input nic ip and vip with same ip address in Harvester-Installer</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Installation</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Boot into ISO installer</li> <li>Specify same IP for NIC and VIP</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Error message is displayed <img src="https://user-images.githubusercontent.com/83787952/178049998-e4eec9fe-d687-4efc-9618-940432d37a3d.png" alt="image"></li> </ol> Negative Restore a backup while VM is restoring https://harvester.github.io/tests/manual/_incoming/2559-negative-restore-backup-restoring/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2559-negative-restore-backup-restoring/ - Related issues: #2559 [BUG] Backup unable to be restored and the VM can&rsquo;t be deleted Category: Backup/Restore Verification Steps Install Harvester with any nodes Login to Dashboard then navigate to Advanced/Settings, setup backup-target with NFS or S3 Create Image for VM creation Create VM vm1 Take backup from vm1 as vm1b Take backup from vm1 as vm1b2 Click Edit YAML of vm1b, update field status.source.spec.spec.domain.cpu.cores, increase 1 Stop VM vm1 Restore backup vm1b2 with Replace Existing Restore backup vm1b with Replace Existing when the VM vm1 still in state restoring Expected Results You should get an error when trying to restore. 
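For the two backup/restore negative cases above, it can help to watch the restore state and the backup-target value from a node shell while the UI steps run. A minimal sketch, assuming kubectl access on a Harvester node and the harvesterhci.io resource names used by current releases; verify the exact names on your cluster first:

```bash
# Sketch only: watch backup/restore objects while the backup-target is changed mid-restore.
kubectl get settings.harvesterhci.io backup-target -o jsonpath='{.value}'   # current NFS/S3 target
kubectl get virtualmachinebackups.harvesterhci.io -A                        # vm1b / vm1b2 should exist and be ready
kubectl get virtualmachinerestores.harvesterhci.io -A -w                    # the in-progress restore; a second,
                                                                             # conflicting restore should be rejected
```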
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2559">#2559</a> [BUG] Backup unable to be restored and the VM can&rsquo;t be deleted</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Backup/Restore</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester with any nodes</li> <li>Login to Dashboard then navigate to Advanced/Settings, setup backup-target with NFS or S3</li> <li>Create Image for VM creation</li> <li>Create VM vm1</li> <li>Take backup from vm1 as vm1b</li> <li>Take backup from vm1 as vm1b2</li> <li>Click Edit YAML of vm1b, update field status.source.spec.spec.domain.cpu.cores, increase 1</li> <li>Stop VM vm1</li> <li>Restore backup vm1b2 with Replace Existing</li> <li>Restore backup vm1b with Replace Existing when the VM vm1 still in state restoring</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error when trying to restore. <img src="https://user-images.githubusercontent.com/5370752/182722180-3e2f606b-beef-4f8b-8f33-8d235587db4b.png" alt="image"></li> </ol> Networkconfigs function check https://harvester.github.io/tests/manual/_incoming/2841-networkconfigs-function-check/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2841-networkconfigs-function-check/ - Related issues: #2841 [FEATURE] Reorganize the networkconfigs UI Category: Network Verification Steps Go to Cluster Networks/Configs Create a cluster network and provide the name Create a Network Config Given the NICs that not been used by mgmt-bo (eg. ens1f1) Use default active-backup mode Check the cluster network config in Active status Go to Networks Create a VLAN network Given the name and vlan id Select the cluster network from drop down list Check the vlan route activity Check the NIC ens1f1 can bind to the cnetwork-bo + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2841">#2841</a> [FEATURE] Reorganize the networkconfigs UI</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Go to Cluster Networks/Configs</p> </li> <li> <p>Create a cluster network and provide the name <img src="https://user-images.githubusercontent.com/29251855/194039791-90a88cc0-879d-44d1-8b81-66a141c13732.png" alt="image"></p> </li> <li> <p>Create a Network Config</p> </li> <li> <p>Given the NICs that not been used by mgmt-bo (eg. 
<code>ens1f1</code>)<br> <img src="https://user-images.githubusercontent.com/29251855/194040174-72813f78-868f-4d02-9f79-023c61632994.png" alt="image"></p> </li> <li> <p>Use default <code>active-backup</code> mode</p> NIC ip and vip can't be the same in Harvester installer https://harvester.github.io/tests/manual/_incoming/2229-2449-nic-ip-vip-different-harvester-installer-copy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2229-2449-nic-ip-vip-different-harvester-installer-copy/ - Related issues: #2229 [BUG] input nic ip and vip with same ip address in Harvester-Installer Related issues: #2449 [backport v1.0] [BUG] input nic ip and vip with same ip address in Harvester-Installer Category: Harvester Installer Verification Steps Launch ISO install process Set static node IP and gateway Set the same node IP to the VIP field and press enter Expected Results During Harvester ISO installer process, when we set static node IP address with the same one as the VIP IP address There will be an error message to prevent the installation process VIP must not be the same as Management NIC IP + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2229">#2229</a> [BUG] input nic ip and vip with same ip address in Harvester-Installer</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2449">#2449</a> [backport v1.0] [BUG] input nic ip and vip with same ip address in Harvester-Installer</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Harvester Installer</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Launch ISO install process</li> <li>Set static node IP and gateway <img src="https://user-images.githubusercontent.com/29251855/173719118-1fd1609d-74f2-4f7d-9ff3-e1d21227e542.png" alt="image"></li> <li>Set the same node IP to the VIP field and press enter<br> <img src="https://user-images.githubusercontent.com/29251855/173719257-f60b55fd-0211-4fb7-8f45-3176eef4e577.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>During Harvester ISO installer process, when we set static node IP address with the same one as the VIP IP address</li> <li>There will be an error message to prevent the installation process <code>VIP must not be the same as Management NIC IP</code> <img src="https://user-images.githubusercontent.com/29251855/173719257-f60b55fd-0211-4fb7-8f45-3176eef4e577.png" alt="image"></li> </ul> Node disk manager should prevent too many concurrent disk formatting occur within a short period https://harvester.github.io/tests/manual/_incoming/1831_node_disk_manager_should_prevent_too_many_concurrent_disk_formatting_occur_within_a_short_period/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1831_node_disk_manager_should_prevent_too_many_concurrent_disk_formatting_occur_within_a_short_period/ - Ref: https://github.com/harvester/harvester/issues/1831 Criteria exceed the maximum, there should have requeue devices which equals the exceeds hit the maximum, there should not have requeue devices less than maximum, there should not have requeue devices Verify Steps: Install Harvester with any node having at least 6 additional disks Login to console and execute command to update log level to debug and max-concurrent-ops to 1 (On KVM environment, we have to set to 1 to make sure the requeuing will happen. 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/1831">https://github.com/harvester/harvester/issues/1831</a></p> <h3 id="criteria">Criteria</h3> <ul> <li><input checked="" disabled="" type="checkbox"> exceed the maximum, there should have requeue devices which equals the exceeds</li> <li><input checked="" disabled="" type="checkbox"> hit the maximum, there should not have requeue devices</li> <li><input checked="" disabled="" type="checkbox"> less than maximum, there should not have requeue devices</li> </ul> <p><img src="https://user-images.githubusercontent.com/5169694/177324553-3b4800b2-9db9-45ec-a3cf-a630acb384cf.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any node having at least 6 additional disks</li> <li>Login to console and execute command to update log level to <code>debug</code> and <code>max-concurrent-ops</code> to <code>1</code> (On KVM environment, we have to set to <code>1</code> to make sure the <em>requeuing</em> will happen.) <ul> <li><code>kubectl patch ds -n harvester-system harvester-node-disk-manager --type=json -p'[{&quot;op&quot;:&quot;replace&quot;, &quot;path&quot;:&quot;/spec/template/spec/containers/0/command&quot;, &quot;value&quot;: [&quot;node-disk-manager&quot;, &quot;--debug&quot;, &quot;--max-concurrent-ops&quot;, &quot;1&quot;]}]'</code></li> </ul> </li> <li>Watching log output by executing <code>kubectl get pods -A | grep node-disk | awk '{system(&quot;kubectl logs -fn &quot;$1&quot; &quot;$2)}'</code></li> <li>Login to dashboard then navigate and edit host to add more than <code>1</code> disks</li> <li>In the console log, should display <code>Hit maximum concurrent count. Requeue device &lt;device id&gt;</code></li> <li>In the dashboard, disks should be added successfully.</li> <li>Login to console and execute command to update log level to <code>debug</code> and <code>max-concurrent-ops</code> to <code>2</code> <ul> <li><code>kubectl patch ds -n harvester-system harvester-node-disk-manager --type=json -p'[{&quot;op&quot;:&quot;replace&quot;, &quot;path&quot;:&quot;/spec/template/spec/containers/0/command&quot;, &quot;value&quot;: [&quot;node-disk-manager&quot;, &quot;--debug&quot;, &quot;--max-concurrent-ops&quot;, &quot;2&quot;]}]'</code></li> </ul> </li> <li>Watching log output by executing <code>kubectl get pods -A | grep node-disk | awk '{system(&quot;kubectl logs -fn &quot;$1&quot; &quot;$2)}'</code></li> <li>Login to dashboard then navigate and edit host to add <code>2</code> disks</li> <li>In the console log, there should not display <code>Hit maximum concurrent count. Requeue device &lt;device id&gt;</code></li> <li>In the dashboard, disks should be added successfully.</li> <li>Login to console and execute command to update log level to <code>debug</code> <ul> <li><code>kubectl patch ds -n harvester-system harvester-node-disk-manager --type=json -p'[{&quot;op&quot;:&quot;replace&quot;, &quot;path&quot;:&quot;/spec/template/spec/containers/0/command&quot;, &quot;value&quot;: [&quot;node-disk-manager&quot;, &quot;--debug&quot;]}]'</code></li> </ul> </li> <li>Watching log output by executing <code>kubectl get pods -A | grep node-disk | awk '{system(&quot;kubectl logs -fn &quot;$1&quot; &quot;$2)}'</code></li> <li>Login to dashboard then navigate and edit host to add less than <code>5</code> disks</li> <li>In the console log, there should not display <code>Hit maximum concurrent count. 
Requeue device &lt;device id&gt;</code></li> <li>In the dashboard, disks should be added successfully.</li> </ol> Node join fails with self-signed certificate https://harvester.github.io/tests/manual/_incoming/2736_node_join_fails_with_self-signed_certificate/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2736_node_join_fails_with_self-signed_certificate/ - Ref: https://github.com/harvester/harvester/issues/2736 Verified this bug has been fixed. Test Information Environment: qemu/KVM 2 nodes Harvester Version: master-032742f0-head ui-source Option: Auto Verify Steps: Follow Steps in https://github.com/harvester/harvester-installer/pull/335 + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2736">https://github.com/harvester/harvester/issues/2736</a></p> <p>Verified this bug has been fixed.</p> <h2 id="test-information">Test Information</h2> <ul> <li>Environment: <strong>qemu/KVM 2 nodes</strong></li> <li>Harvester Version: <strong>master-032742f0-head</strong></li> <li><strong>ui-source</strong> Option: <strong>Auto</strong></li> </ul> <h3 id="verify-steps">Verify Steps:</h3> <ul> <li>Follow Steps in <a href="https://github.com/harvester/harvester-installer/pull/335">https://github.com/harvester/harvester-installer/pull/335</a></li> </ul> Node promotion for topology label https://harvester.github.io/tests/manual/_incoming/2325-node-promotion-for-topology-label/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2325-node-promotion-for-topology-label/ - Related issues: #2325 [FEATURE] Harvester control plane should spread across failure domains Category: Host Verification Steps Install first node, the role of this node should be Management Node Install second node, the role of this node should be Compute Node, the second node shouldn&rsquo;t be promoted to Management Node Add label topology.kubernetes.io/zone=zone1 to the first node Install third node, the second node and third node shouldn&rsquo;t be promoted Add label topology. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2325">#2325</a> [FEATURE] Harvester control plane should spread across failure domains</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install first node, the role of this node should be Management Node</li> <li>Install second node, the role of this node should be Compute Node, the second node shouldn&rsquo;t be promoted to Management Node</li> <li>Add label topology.kubernetes.io/zone=zone1 to the first node</li> <li>Install third node, the second node and third node shouldn&rsquo;t be promoted</li> <li>Add label topology.kubernetes.io/zone=zone1 to the second node, the second node and third node shouldn&rsquo;t be promoted</li> <li>Add label topology.kubernetes.io/zone=zone3 to the third node, the second node and third node shouldn&rsquo;t be promoted</li> <li>Change the value of label topology.kubernetes.io/zone from zone1 to zone2 in the second node, the second node and third node will be promoted to Management Node one by one</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Checked can pass the following test scenarios.</p> Polish harvester machine config in Rancher https://harvester.github.io/tests/manual/_incoming/2598-polish-harvester-machine-config-in-rancher/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2598-polish-harvester-machine-config-in-rancher/ - Related issues: #2598 [BUG]Polish harvester machine config Category: Rancher integration Verification Steps Import Harvester from Rancher Create a standard user local in Rancher User &amp; Authentication Open Cluster Management page Edit cluster config Expand Member Roles Add local user with Cluster Owner role Create cloud credential of Harvester Login with local user Open the provisioning RKE2 cluster page Select Advanced settings Add Pod Scheduling Select Pods in these namespaces Check the list of available pods with the namespaces options above Check can input Topology key value Access Harvester UI (Not from Rancher) Open project/namespace Create several namespaces Login local user to Rancher Open the the provisioning RKE2 cluster page Check the available Pods in these namespaces list have been updated Expected Results Checked the following test plan for RKE2 cluster are working as expected + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2598">#2598</a> [BUG]Polish harvester machine config</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import Harvester from Rancher</li> <li>Create a standard user <code>local</code> in Rancher User &amp; Authentication</li> <li>Open Cluster Management page</li> <li>Edit cluster config <img src="https://user-images.githubusercontent.com/29251855/182781682-5cdd3c6a-517b-4f61-980d-3ee3cab86745.png" alt="image"></li> <li>Expand Member Roles</li> <li>Add <code>local</code> user with Cluster Owner role <img src="https://user-images.githubusercontent.com/29251855/182781823-b71ba504-6488-4581-b50d-17c333496b8c.png" alt="image"></li> <li>Create cloud credential of Harvester</li> <li>Login with <code>local</code> user</li> <li>Open the provisioning RKE2 cluster page</li> <li>Select Advanced settings</li> <li>Add Pod Scheduling</li> <li>Select <code>Pods in these namespaces</code></li> <li>Check the list of available pods with the namespaces options above</li> 
<li>Check you can input a Topology key value</li> <li>Access Harvester UI (Not from Rancher)</li> <li>Open project/namespace</li> <li>Create several namespaces</li> <li>Login <code>local</code> user to Rancher</li> <li>Open the provisioning RKE2 cluster page</li> <li>Check the available <code>Pods in these namespaces</code> list has been updated</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Checked that the following test plan for the <code>RKE2</code> cluster works as expected</p> Press the Enter key in setting field shouldn't refresh page https://harvester.github.io/tests/manual/_incoming/2569-press-enter-settings-should-not-refresh-page-copy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2569-press-enter-settings-should-not-refresh-page-copy/ - Related issues: #2569 [BUG] Press the Enter key, the page will be refreshed automatically Category: Settings Verification Steps Check every page that has an input field in the Settings page Move cursor to any input field Click the Enter button Check the page will not be automatically reloaded Expected Results On v1.0.3 backport, when we press the Enter key in the following page fields, it will not be refreshed automatically. Also checked the following pages + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2569">#2569</a> [BUG] Press the Enter key, the page will be refreshed automatically</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Settings</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Check every page that has an input field in the Settings page</li> <li>Move cursor to any input field</li> <li>Click the <code>Enter</code> button</li> <li>Check the page will not be automatically reloaded</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>On v1.0.3 backport, when we press the <code>Enter</code> key in the following page fields, it will not be refreshed automatically.</p> Prevent normal users create harvester-public namespace https://harvester.github.io/tests/manual/_incoming/2485-prevent-normal-user-create-harvesterpublic-ns/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2485-prevent-normal-user-create-harvesterpublic-ns/ - Related issues: #2485 [FEATURE] [Harvester Node Driver v2] Prevent normal users from creating VMs in harvester-public namespace Category: Rancher integration Verification Steps Import Harvester from Rancher Create standard user in Rancher User &amp; Authentication Edit Harvester in virtualization Management, assign Cluster Member role to user Login with user Create cloud credential Provision an RKE2 cluster Check the namespace dropdown list Expected Results Now the standard user with cluster member rights won&rsquo;t display harvester-public while user node driver to provision the RKE2 cluster.
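Beyond the namespace dropdown check in the UI, the same restriction can be probed from the command line. A minimal sketch, assuming a kubeconfig issued for the standard user; the kubeconfig path is a placeholder, not part of the original steps:

```bash
# Sketch only: confirm the standard user cannot create VMs in harvester-public.
export KUBECONFIG=~/.kube/harvester-standard-user.yaml                       # hypothetical path
kubectl auth can-i create virtualmachines.kubevirt.io -n harvester-public    # expected: no
kubectl auth can-i create virtualmachines.kubevirt.io -n default             # expected: yes in a namespace the user is a member of
```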
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2485">#2485</a> [FEATURE] [Harvester Node Driver v2] Prevent normal users from creating VMs in harvester-public namespace</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import Harvester from Rancher</li> <li>Create standard <code>user</code> in Rancher User &amp; Authentication</li> <li>Edit Harvester in virtualization Management, assign Cluster Member role to user <img src="https://user-images.githubusercontent.com/29251855/191748214-50fd7290-e2ae-4910-9a27-c9b67c581886.png" alt="image"></li> <li>Login with user</li> <li>Create cloud credential</li> <li>Provision an RKE2 cluster</li> <li>Check the namespace dropdown list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>Now the standard user with cluster member rights won&rsquo;t display <code>harvester-public</code> while user node driver to provision the RKE2 cluster.</p> Project owner role on customized project open Harvester cluster https://harvester.github.io/tests/manual/_incoming/2394-2395-project-owner-customized-project-open-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2394-2395-project-owner-customized-project-open-harvester/ - Related issues: #2394 [BUG] Standard rancher user with project owner role of customized project to access Harvester get &ldquo;404 Not Found&rdquo; error Related issues: #2395 [backport v1.0] [BUG] Standard rancher user with project owner role of customized project to access Harvester get &ldquo;404 Not Found&rdquo; error Category: Rancher integration Verification Steps Import Harvester from Rancher Access Harvester on virtualization management page Create a project test and namespace test under it Go to user authentication page Create a stand rancher user test Access Harvester in Rancher Set project owner role of test project to test user Login Rancher with test user Access the virtualization management page Expected Results Now the standard user with project owner role can access harvester in virtualization management page correctly + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2394">#2394</a> [BUG] Standard rancher user with project owner role of customized project to access Harvester get &ldquo;404 Not Found&rdquo; error</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2395">#2395</a> [backport v1.0] [BUG] Standard rancher user with project owner role of customized project to access Harvester get &ldquo;404 Not Found&rdquo; error</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import Harvester from Rancher</li> <li>Access Harvester on virtualization management page</li> <li>Create a project test and namespace test under it</li> <li>Go to user authentication page</li> <li>Create a stand rancher user test</li> <li>Access Harvester in Rancher</li> <li>Set project owner role of test project to test user</li> <li>Login Rancher with test user</li> <li>Access the virtualization management page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Now the standard user with project owner role can access harvester in virtualization management page correctly <img src="https://user-images.githubusercontent.com/29251855/174706597-f98ecc41-b479-4e5b-b163-02f43c1c6138.png" alt="image"></li> 
</ul> Project owner should not see additional alert https://harvester.github.io/tests/manual/_incoming/2288-2350-project-owner-should-not-see-alert-copy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2288-2350-project-owner-should-not-see-alert-copy/ - Related issues: #2288 [BUG] The project-owner user will see an additional alert Related issues: #2350 [Backport v1.0] The project-owner user will see an additional alert Category: Rancher integration Verification Steps Importing a harvester cluster in a rancher cluster enter the imported harvester cluster from the Virtualization Management page create a new Project (test), Create a test namespace in the test project. go to Network page, add vlan 1 create a vm, choose test namespace, choose vlan network, click save create a new user (test), choose Standard User go to the project page, edit test Project, set test user to Project Owner。 login again with test user go to the vm page Expected Results Use rancher standard user test with project owner permission to access Harvester. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2288">#2288</a> [BUG] The project-owner user will see an additional alert</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2350">#2350</a> [Backport v1.0] The project-owner user will see an additional alert</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Importing a harvester cluster in a rancher cluster</li> <li>enter the imported harvester cluster from the <code>Virtualization Management</code> page</li> <li>create a new Project (test), Create a test namespace in the test project.</li> <li>go to <code>Network</code> page, add <code>vlan 1</code></li> <li>create a vm, choose <code>test namespace</code>, choose <code>vlan network</code>, click save</li> <li>create a new user (test), choose <code>Standard User</code></li> <li>go to the <code>project page</code>, edit <code>test</code> Project, set <code>test</code> user to Project Owner。 <!-- raw HTML omitted --></li> <li>login again with <code>test user</code></li> <li>go to the vm page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Use rancher standard user <code>test</code> with project owner permission to access Harvester. Now there is no error alert on the created VM with vlan1 network <img src="https://user-images.githubusercontent.com/29251855/174733151-c8bcffdd-50e0-404e-a5b6-a9ff2f1a7387.png" alt="image"></li> </ul> Promote remaining host when delete one https://harvester.github.io/tests/manual/_incoming/2191-promote-remaining-host-when-delete-one/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2191-promote-remaining-host-when-delete-one/ - Related issues: #2191 [BUG] Promote fail, cluster stays in Provisioning phase Category: Host Verification Steps Create a 4-node Harvester cluster. Wait for three nodes to become control plane nodes (role is control-plane,etcd,master). Delete one of the control plane nodes. The remaining worker node should be promoted to a control plane node (role is control-plane,etcd,master). Expected Results Four nodes Harvester cluster status, before delete one of the control-plane node n1-221021:/etc # kubectl get nodes NAME STATUS ROLES AGE VERSION n1-221021 Ready control-plane,etcd,master 17h v1. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2191">#2191</a> [BUG] Promote fail, cluster stays in Provisioning phase</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a 4-node Harvester cluster.</li> <li>Wait for three nodes to become control plane nodes (role is control-plane,etcd,master).</li> <li>Delete one of the control plane nodes.</li> <li>The remaining worker node should be promoted to a control plane node (role is control-plane,etcd,master).</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>Four nodes Harvester cluster status, before delete one of the control-plane node</p> rancher-monitoring status when hosting NODE down https://harvester.github.io/tests/manual/_incoming/2243-rancher-monitoring-status-when-hosting-node-down/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2243-rancher-monitoring-status-when-hosting-node-down/ - Related issues: #2243 [BUG] rancher-monitoring is unusable when hosting NODE is (accidently) down Category: Monitoring Verification Steps Install a two nodes harvester cluster Check the Initial state of the 2 nodes Harvester cluster harv-node1-0719:~ # kubectl get nodes NAME STATUS ROLES AGE VERSION harv-node1-0719 Ready control-plane,etcd,master 36m v1.21.11+rke2r1 harv-node2-0719 Ready &lt;none&gt; harv-node1-0719:~ # kubectl get pods -A | grep monitoring cattle-monitoring-system prometheus-rancher-monitoring-prometheus-0 3/3 Running 0 33m cattle-monitoring-system rancher-monitoring-grafana-d9c56d79b-ckbjc 3/3 Running 0 33m harv-node1-0719:~ # kubectl get pods prometheus-rancher-monitoring-prometheus-0 -n cattle-monitoring-system -o yaml | grep nodeName nodeName: harv-node1-0719 Power off both nodes + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2243">#2243</a> [BUG] rancher-monitoring is unusable when hosting NODE is (accidently) down</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Monitoring</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install a two nodes harvester cluster</li> <li>Check the Initial state of the 2 nodes Harvester cluster</li> </ol> <pre tabindex="0"><code>harv-node1-0719:~ # kubectl get nodes NAME STATUS ROLES AGE VERSION harv-node1-0719 Ready control-plane,etcd,master 36m v1.21.11+rke2r1 harv-node2-0719 Ready &lt;none&gt; harv-node1-0719:~ # kubectl get pods -A | grep monitoring cattle-monitoring-system prometheus-rancher-monitoring-prometheus-0 3/3 Running 0 33m cattle-monitoring-system rancher-monitoring-grafana-d9c56d79b-ckbjc 3/3 Running 0 33m harv-node1-0719:~ # kubectl get pods prometheus-rancher-monitoring-prometheus-0 -n cattle-monitoring-system -o yaml | grep nodeName nodeName: harv-node1-0719 </code></pr RBAC Cluster Owner https://harvester.github.io/tests/manual/_incoming/2626-local-cluster-0owner/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2626-local-cluster-0owner/ - Related issues: #2626 [BUG] Access Harvester project/namespace page hangs with no response timeout with local owner role from Rancher Category: Authentication Verification Steps Import Harvester from Rancher Create a standard user local in Rancher User &amp; Authentication Open Cluster Management page Edit cluster config Expand Member Roles Add local user with Cluster Owner role Logout Admin Login with local user Access Harvester from virtualization management Click the 
Project/Namespace page Expected Results Local owner role user can access and display Harvester project/namespace place correctly without hanging to timeout + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2626">#2626</a> [BUG] Access Harvester project/namespace page hangs with no response timeout with local owner role from Rancher</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Authentication</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import Harvester from Rancher</li> <li>Create a standard user local in Rancher User &amp; Authentication</li> <li>Open Cluster Management page</li> <li>Edit cluster config <img src="https://user-images.githubusercontent.com/29251855/182781682-5cdd3c6a-517b-4f61-980d-3ee3cab86745.png" alt="image"></li> <li>Expand Member Roles</li> <li>Add local user with Cluster Owner role <img src="https://user-images.githubusercontent.com/29251855/182781823-b71ba504-6488-4581-b50d-17c333496b8c.png" alt="image"></li> <li>Logout Admin</li> <li>Login with local user</li> <li>Access Harvester from virtualization management</li> <li>Click the Project/Namespace page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Local owner role user can access and display Harvester project/namespace place correctly without hanging to timeout</li> </ol> RBAC Create VM with restricted admin user https://harvester.github.io/tests/manual/_incoming/2587-2116-create-vm-with-restricted-admin/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2587-2116-create-vm-with-restricted-admin/ - Related issues: #2587 [BUG] namespace on create VM is wrong when going through Rancher #2116 [BUG] You can see cattle-monitoring-system volumes as restricted admin in Harvester Category: Authentication Verification Steps Verification Steps Import Harvester into Rancher Create a restricted admin Navigate to Volumes page Verify you only see associated Volumes Log out of admin and log in to restricted admin Navigate to Harvester UI via virtualization management Open virtual machines tab Click create Verified that namespace was default. 
+ <ul> <li>Related issues:</li> </ul> <ul> <li><a href="https://github.com/harvester/harvester/issues/2587">#2587</a> [BUG] namespace on create VM is wrong when going through Rancher</li> <li><a href="https://github.com/harvester/harvester/issues/2116">#2116</a> [BUG] You can see cattle-monitoring-system volumes as restricted admin in Harvester</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Authentication</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Verification Steps</li> <li>Import Harvester into Rancher</li> <li>Create a restricted admin</li> <li>Navigate to Volumes page</li> <li>Verify you only see associated Volumes</li> <li>Log out of admin and log in to restricted admin</li> <li>Navigate to Harvester UI via virtualization management</li> <li>Open virtual machines tab</li> <li>Click create</li> <li>Verified that namespace was default.</li> <li>Create VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create with no errors</li> </ol> Reinstall agent node https://harvester.github.io/tests/manual/_incoming/2665-2892-reinstall-agent-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2665-2892-reinstall-agent-node/ - Related issues: #2665 [BUG] reinstall 1st node Related issues: #2892 [BUG] rancher-system-agent keeps showing error on a new node in an upgraded cluster Category: Host Verification Steps Test Plan 1: Reinstall management node and agent node in a upgraded cluster Create a 4-node v1.0.3 cluster. Upgrade the master branch: Check the spec content in provisioning.cattle.io/v1/clusters -&gt; fleet-local Check the iface content in helm.cattle.io/v1/helmchartconfigs -&gt; rke2-canal spec: │ │ valuesContent: |- │ │ flannel: │ │ iface: &#34;&#34; Remove the agent node and 1 management node. 
+ <ul> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/2665">#2665</a> [BUG] reinstall 1st node</p> </li> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/2892">#2892</a> [BUG] rancher-system-agent keeps showing error on a new node in an upgraded cluster</p> </li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <h3 id="test-plan-1-reinstall-management-node-and-agent-node-in-a-upgraded-cluster">Test Plan 1: Reinstall management node and agent node in a upgraded cluster</h3> <ol> <li> <p>Create a 4-node v1.0.3 cluster.</p> </li> <li> <p>Upgrade the master branch:</p> </li> <li> <p>Check the spec content in <code>provisioning.cattle.io/v1/clusters -&gt; fleet-local</code> <img src="https://user-images.githubusercontent.com/29251855/196139161-7b6e6e84-692d-4f4f-a978-62fc50f64f06.png" alt="image"></p> Remove Pod Scheduling from harvester rke2 and rke1 https://harvester.github.io/tests/manual/_incoming/2642-remove-pod-scheduling/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2642-remove-pod-scheduling/ - Related issues: #2642 [BUG] Remove Pod Scheduling from harvester rke2 and rke1 Category: Rancher Test Information Test Environment: 1 node harvester on local kvm machine Harvester version: v1.0-44fb5f1a-head (08/10) Rancher version: v2.6.7-rc7 Environment Setup Prepare Harvester master node Prepare Rancher v2.6.7-rc7 Import Harvester to Rancher Set ui-offline-preferred: Remote Go to Harvester Support page Download Kubeconfig Copy the content of Kubeconfig Verification Steps RKE2 Verification Steps Open Harvester Host page then edit host config Add the following key value in the labels page: topology. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2642">#2642</a> [BUG] Remove Pod Scheduling from harvester rke2 and rke1</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher</li> </ul> <h2 id="test-information">Test Information</h2> <p>Test Environment: 1 node harvester on local kvm machine Harvester version: v1.0-44fb5f1a-head (08/10) Rancher version: v2.6.7-rc7</p> <h2 id="environment-setup">Environment Setup</h2> <ol> <li>Prepare Harvester master node</li> <li>Prepare Rancher v2.6.7-rc7</li> <li>Import Harvester to Rancher</li> <li>Set ui-offline-preferred: Remote</li> <li>Go to Harvester Support page</li> <li>Download Kubeconfig</li> <li>Copy the content of Kubeconfig</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <h3 id="rke2-verification-steps">RKE2 Verification Steps</h3> <ol> <li>Open Harvester Host page then edit host config</li> <li>Add the following key value in the labels page: <ul> <li>topology.kubernetes.io/zone: zone_bp</li> <li>topology.kubernetes.io/region: region_bp <img src="https://user-images.githubusercontent.com/29251855/183802450-a790b9a2-3e2c-4559-8f84-b5a768b9c83d.png" alt="image"></li> </ul> </li> <li>Open the RKE2 provisioning page</li> <li>Expand the show advanced</li> <li>Click add Node selector in Node scheduling</li> <li>Use default Required priority</li> <li>Click Add Rule</li> <li>Provide the following key/value pairs</li> <li>topology.kubernetes.io/zone: zone_bp</li> <li>topology.kubernetes.io/region: region_bp</li> <li>Provide the following user data <pre tabindex="0"><code>password: 123456 chpasswd: { expire: False } ssh_pwauth: True </code></pr Restart Button Web VNC window https://harvester.github.io/tests/manual/_incoming/379-restart-button-web-vnc-window/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/379-restart-button-web-vnc-window/ - Related issues: #379 [Question] Restart Button Web VNC window Category: VM Verification Steps Create a new VM with Ubuntu desktop 20.04 Prepare two volume Complete the installation process Open a web browser on Ubuntu desktop Check the shortcut keys combination Expected Results The soft reboot keys can display and reboot correctly on Linux OS VM (Ubuntu desktop 20.04) + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/379">#379</a> [Question] Restart Button Web VNC window</li> </ul> <h2 id="category">Category:</h2> <ul> <li>VM</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a new VM with Ubuntu desktop 20.04</li> <li>Prepare two volume</li> <li>Complete the installation process</li> <li>Open a web browser on Ubuntu desktop</li> <li>Check the shortcut keys combination</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The soft reboot keys can display and reboot correctly on Linux OS VM (Ubuntu desktop 20.04) <img src="https://user-images.githubusercontent.com/29251855/177100026-e67d0101-0a5b-433c-b9ab-e2b4af1a8d0f.png" alt="image"></li> </ol> Restart/Stop VM with in progress Backup https://harvester.github.io/tests/manual/_incoming/1702-do-not-allow-restart-or-stop-vm-when-backup-is-in-progress/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1702-do-not-allow-restart-or-stop-vm-when-backup-is-in-progress/ - Related issues: #1702 Don&rsquo;t allow restart/stop vm when backup is in progress Verification Steps Create a VM. Create a VMBackup for it. Before VMBackup is done, stop/restart the VM. 
Verify VM can&rsquo;t be stopped/restarted. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1702">#1702</a> Don&rsquo;t allow restart/stop vm when backup is in progress</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a VM.</li> <li>Create a VMBackup for it.</li> <li>Before VMBackup is done, stop/restart the VM. Verify VM can&rsquo;t be stopped/restarted.</li> </ol> restored VM can not be cloned https://harvester.github.io/tests/manual/_incoming/2968_restored_vm_can_not_be_cloned/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2968_restored_vm_can_not_be_cloned/ - Ref: https://github.com/harvester/harvester/issues/2968 Test Information Environment: qemu/KVM 3 nodes Harvester Version: master-f96827b2-head ui-source Option: Auto Verify Steps: Follow Steps to reproduce in https://github.com/harvester/harvester/issues/2968#issue-1413026149 Additional regression test cases listed in https://github.com/harvester/tests/issues/568#issue-1414534000 + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2968">https://github.com/harvester/harvester/issues/2968</a></p> <h2 id="test-information">Test Information</h2> <ul> <li>Environment: <strong>qemu/KVM 3 nodes</strong></li> <li>Harvester Version: <strong>master-f96827b2-head</strong></li> <li><strong>ui-source</strong> Option: <strong>Auto</strong></li> </ul> <h3 id="verify-steps">Verify Steps:</h3> <ul> <li>Follow <strong>Steps to reproduce</strong> in <a href="https://github.com/harvester/harvester/issues/2968#issue-1413026149">https://github.com/harvester/harvester/issues/2968#issue-1413026149</a></li> <li>Additional regression test cases listed in <a href="https://github.com/harvester/tests/issues/568#issue-1414534000">https://github.com/harvester/tests/issues/568#issue-1414534000</a></li> </ul> Restored VM name does not support uppercases https://harvester.github.io/tests/manual/_incoming/4544_restored_vm_name_does_not_support_uppercases/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/4544_restored_vm_name_does_not_support_uppercases/ - Related issues: #4544 [BUG] Unable to restore backup into new VM when the name starts with upper case Category: Backup/Restore Verification Steps Setup backup-target in &lsquo;Advanced&rsquo; -&gt; &lsquo;Settings&rsquo; Create an image for VM creation Create a VM vm1 Take a VM backup vm1b Go to &lsquo;Backup &amp; Snapshot&rsquo;, restore vm1b to new VM Positive Cases Single lower Lowers Lowers contains &lsquo;.&rsquo; Lowers contains &lsquo;-&rsquo; Lowers contains &lsquo;.&rsquo; and &lsquo;-&rsquo; Negtive Cases Upper Upper infront of valid Upper append to valid Upper in the middle of valid Expected Results VM name should comply with following rules: + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/4544">#4544</a> [BUG] Unable to restore backup into new VM when the name starts with upper case</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Backup/Restore</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Setup <code>backup-target</code> in &lsquo;Advanced&rsquo; -&gt; &lsquo;Settings&rsquo;</li> <li>Create an image for VM creation</li> <li>Create a VM <code>vm1</code></li> <li>Take a VM backup <code>vm1b</code></li> <li>Go to &lsquo;Backup &amp; Snapshot&rsquo;, restore <code>vm1b</code> to new VM</li> </ol> <h3 id="positive-cases">Positive Cases</h3> <ol> <li>Single lower</li> <li>Lowers</li> <li>Lowers 
contains &lsquo;.&rsquo;</li> <li>Lowers contains &lsquo;-&rsquo;</li> <li>Lowers contains &lsquo;.&rsquo; and &lsquo;-&rsquo; <img src="https://user-images.githubusercontent.com/2773781/270225975-17fea11e-a266-484d-a9d4-3e3af1624d45.png" alt="image"></li> </ol> <h3 id="negtive-cases">Negtive Cases</h3> <ol> <li> <p>Upper <img src="https://github.com/harvester/harvester/assets/2773781/b2411e02-e0c1-4fef-b996-997c8c827862" alt="image"></p> Restricted admin should not see cattle-monitoring-system volumes https://harvester.github.io/tests/manual/_incoming/2116-2351-restricted-admin-no-cattle-monitoring-system-volumes/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2116-2351-restricted-admin-no-cattle-monitoring-system-volumes/ - Related issues: #2116 [BUG] You can see cattle-monitoring-system volumes as restricted admin in Harvester Related issues: #2351 [Backport v1.0] You can see cattle-monitoring-system volumes as restricted admin in Harvester Category: Rancher integration Verification Steps Import Harvester to Rancher Create restricted admin in Rancher Log out of rancher Log in as restricted admin Navigate to Harvester ui in virtualization management Navigate to volumes page Expected Results Login Rancher with restricted admin and access Harvester volume page. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2116">#2116</a> [BUG] You can see cattle-monitoring-system volumes as restricted admin in Harvester</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2351">#2351</a> [Backport v1.0] You can see cattle-monitoring-system volumes as restricted admin in Harvester</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import Harvester to Rancher</li> <li>Create restricted admin in Rancher</li> <li>Log out of rancher</li> <li>Log in as restricted admin</li> <li>Navigate to Harvester ui in virtualization management</li> <li>Navigate to volumes page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Login Rancher with restricted admin and access Harvester volume page. Now it won&rsquo;t display the cattle-monitoring-system volumes. <img src="https://user-images.githubusercontent.com/29251855/174289481-00e74f70-c773-47af-847c-9ca6ecd86e1d.png" alt="image"></li> </ul> Setup and test local Harvester upgrade responder https://harvester.github.io/tests/manual/_incoming/1849-setup-test-local-upgrade-responder/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1849-setup-test-local-upgrade-responder/ - Related issues: #1849 [Task] Improve Harvester upgrade responder Category: Upgrade Verification Steps Follow the steps in https://github.com/harvester/harvester/issues/1849#issuecomment-1180346017 Clone longhorn/upgrade-responder and checkout to v0.1.4. Edit response.json content in config folder { &#34;Versions&#34;: [ { &#34;Name&#34;: &#34;v1.0.2-master-head&#34;, &#34;ReleaseDate&#34;: &#34;2022-06-15T00:00:00Z&#34;, &#34;Tags&#34;: [ &#34;latest&#34;, &#34;test&#34;, &#34;dev&#34; ] } ] } Install InfluxDB Run longhorn/upgrade-responder with the command: go run main.go --debug start --upgrade-response-config config/response.json --influxdb-url http://localhost:8086 --geodb geodb/GeoLite2-City.mmdb --application-name harvester Check the local upgrade responder is running curl -X POST http://localhost:8314/v1/checkupgrade \ -d &#39;{ &#34;appVersion&#34;: &#34;v1. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1849">#1849</a> [Task] Improve Harvester upgrade responder</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <p>Follow the steps in <a href="https://github.com/harvester/harvester/issues/1849#issuecomment-1180346017">https://github.com/harvester/harvester/issues/1849#issuecomment-1180346017</a></p> <ol> <li>Clone <a href="https://github.com/longhorn/upgrade-responder">longhorn/upgrade-responder</a> and checkout to <a href="https://github.com/longhorn/upgrade-responder/releases/tag/v0.1.4">v0.1.4</a>.</li> <li>Edit <a href="https://github.com/longhorn/upgrade-responder/blob/master/config/response.json">response.json</a> content in config folder</li> </ol> <pre tabindex="0"><code>{ &#34;Versions&#34;: [ { &#34;Name&#34;: &#34;v1.0.2-master-head&#34;, &#34;ReleaseDate&#34;: &#34;2022-06-15T00:00:00Z&#34;, &#34;Tags&#34;: [ &#34;latest&#34;, &#34;test&#34;, &#34;dev&#34; ] } ] } </code></pr Support configuring a VLAN at the management interface in installer config https://harvester.github.io/tests/manual/_incoming/1390_support_configuring_a_vlan_at_the_management_interface_in_installer_config/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1390_support_configuring_a_vlan_at_the_management_interface_in_installer_config/ - Ref: https://github.com/harvester/harvester/issues/1390, https://github.com/harvester/harvester/issues/1647 Verify Steps: Install Harvester with any nodes from PXE Boot with configurd vlan with vlan_id Harvester should installed successfully Login to console, execute ip a s dev mgmt-br.&lt;vlan_id&gt; should have IP and accessible Dashboard should be accessible + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1390">https://github.com/harvester/harvester/issues/1390</a>, <a href="https://github.com/harvester/harvester/issues/1647">https://github.com/harvester/harvester/issues/1647</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/192803102-5062546d-ec36-4ecc-a1f3-4e6ec6c7a620.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes from PXE Boot with configurd vlan with <code>vlan_id</code></li> <li>Harvester should installed successfully</li> <li>Login to console, execute <code>ip a s dev mgmt-br.&lt;vlan_id&gt;</code> should have IP and accessible</li> <li>Dashboard should be accessible</li> </ol> Support multiple VLAN physical interfaces https://harvester.github.io/tests/manual/_incoming/2259-multiple-vlan-physical-interfaces/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2259-multiple-vlan-physical-interfaces/ - Related issues: #2259 [FEATURE] Support multiple VLAN physical interfaces Category: Network Verification Steps Create cluster network cn1 Create a vlanconfig config-n1 on cn1 which applied to node 1 only Select an available NIC on the Uplink Create a vlan, the cluster network cn1 vlanconfig and provide valid vlan id 91 Create cluster network cn2 Create a vlanconfig config-n2 on cn2 which applied to node 2 only Select an available NIC on the Uplink Create a vlan, the cluster network cn2 vlanconfig and provide valid vlan id 92 Create cluster network cn3 Create a vlanconfig config-n3 on cn3 which applied to node 3 only Select an available NIC on the Uplink Create a vlan, select the cluster network cn3 vlanconfig and provide valid vlan id 93 Create a VM, use the 
vlan id 1 and specific at any node Create a VM, use the vlan id 91 and specified at node1 Create another VM, use the vlan id 92 Expected Results Can create different vlan on each cluster network Can create VM using vlan id 91 and retrieve IP address correctly Can create VM using vlan id 92 and retrieve IP address correctly Can create VM using vlan id 1 and retrieve IP address correctly + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2259">#2259</a> [FEATURE] Support multiple VLAN physical interfaces</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Create cluster network <code>cn1</code> <img src="https://user-images.githubusercontent.com/29251855/196580297-57541544-48f5-4492-b3e9-a3450697f490.png" alt="image"></p> </li> <li> <p>Create a vlanconfig <code>config-n1</code> on <code>cn1</code> which applied to node 1 only <img src="https://user-images.githubusercontent.com/29251855/196580491-0572c539-5828-4f2e-a0a6-59b40fcc549b.png" alt="image"></p> </li> <li> <p>Select an available NIC on the Uplink <img src="https://user-images.githubusercontent.com/29251855/196580574-d38d59de-251c-4cf8-885d-655b76a78659.png" alt="image"></p> </li> <li> <p>Create a vlan, the cluster network <code>cn1</code> vlanconfig and provide valid vlan id <code>91</code> <img src="https://user-images.githubusercontent.com/29251855/196584602-b663ca69-da9a-42e3-94e0-41e094ff1d0b.png" alt="image"></p> Support private registry for Rancher agent image in Air-gap https://harvester.github.io/tests/manual/_incoming/2176-airgap-private-registry-rancher-agent-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2176-airgap-private-registry-rancher-agent-image/ - Related issues: #2176 [Enhancement] Air-gap operation: Support using a private registry for Rancher agent image Category: Rancher Integration Verification Steps Environment Setup Use vagrant-pxe-harvester to create a harvester cluster. Create another VM myregistry and set it in the same virtual network. In myregistry VM: Install docker. Run following commands: mkdir auth docker run \ --entrypoint htpasswd \ httpd:2 -Bbn testuser testpassword &gt; auth/htpasswd mkdir -p certs openssl req \ -newkey rsa:4096 -nodes -sha256 -keyout certs/domain. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2176">#2176</a> [Enhancement] Air-gap operation: Support using a private registry for Rancher agent image</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher Integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <h3 id="environment-setup">Environment Setup</h3> <ol> <li>Use vagrant-pxe-harvester to create a harvester cluster.</li> <li>Create another VM <code>myregistry</code> and set it in the same virtual network.</li> <li>In <code>myregistry</code> VM: <ul> <li>Install docker.</li> <li>Run following commands:</li> </ul> <pre tabindex="0"><code>mkdir auth docker run \ --entrypoint htpasswd \ httpd:2 -Bbn testuser testpassword &gt; auth/htpasswd mkdir -p certs openssl req \ -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \ -addext &#34;subjectAltName = DNS:myregistry.local&#34; \ -x509 -days 365 -out certs/domain.crt sudo mkdir -p /etc/docker/certs.d/myregistry.local:5000 sudo cp certs/domain.crt /etc/docker/certs.d/myregistry.local:5000/domain.crt docker run -d \ -p 5000:5000 \ --restart=always \ --name registry \ -v &#34;$(pwd)&#34;/certs:/certs \ -v &#34;$(pwd)&#34;/registry:/var/lib/registry \ -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \ -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \ -v &#34;$(pwd)&#34;/auth:/auth \ -e &#34;REGISTRY_AUTH=htpasswd&#34; \ -e &#34;REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm&#34; \ -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \ registry:2 </code></pr Support Volume Clone https://harvester.github.io/tests/manual/_incoming/2293_support_volume_clone/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2293_support_volume_clone/ - Ref: https://github.com/harvester/harvester/issues/2293 Verify Steps: Install Harvester with any nodes Create an Image for VM creation Create vm1 with the image and an additional data volume disk-1 Navigate to Volumes, clone disk-0 and disk-1 which attached to vm1 by clicking Clone Volume Create vm2 with cloned disk-0 and disk-1 vm2 should started successfully Login to vm1, execute following commands: fdisk /dev/vdb with new and primary partition mkfs.ext4 /dev/vdb1 mkdir vdb &amp;&amp; mount -t ext4 /dev/vdb1 vdb ping 127. 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/2293">https://github.com/harvester/harvester/issues/2293</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create an Image for VM creation</li> <li>Create <code>vm1</code> with the image and an additional data volume <code>disk-1</code></li> <li>Navigate to <em>Volumes</em>, clone <em>disk-0</em> and <em>disk-1</em> which attached to <code>vm1</code> by clicking <code>Clone Volume</code></li> <li>Create <code>vm2</code> with cloned <em>disk-0</em> and <em>disk-1</em></li> <li><code>vm2</code> should started successfully</li> <li>Login to <code>vm1</code>, execute following commands: <ul> <li><code>fdisk /dev/vdb</code> with new and primary partition</li> <li><code>mkfs.ext4 /dev/vdb1</code></li> <li><code>mkdir vdb &amp;&amp; mount -t ext4 /dev/vdb1 vdb</code></li> <li><code>ping 127.0.0.1 | tee -a vdb/test</code></li> </ul> </li> <li>Navigate to Volumes, then clone <code>disk-1</code> of <strong>vm1</strong> into <strong>vm1-disk-2</strong></li> <li>Navigate to Virtual Machines, then update <code>vm1</code> to add existing volume <code>vm1-disk-2</code></li> <li>Login to <code>vm1</code> then mount <code>/dev/vdb1</code>(disk-1) and <code>/dev/vdc1</code>(disk-2) into <em>vdb</em> and <em>vdc</em></li> <li>test file should be appeared in both folders of <em>vdb</em> and <em>vdc</em></li> <li>test file should not be empty in both folders of <em>vdb</em> and <em>vdc</em></li> </ol> Support Volume Snapshot https://harvester.github.io/tests/manual/_incoming/2294_support_volume_snapshot/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2294_support_volume_snapshot/ - Ref: https://github.com/harvester/harvester/issues/2294 Verify Steps: Install Harvester with any nodes Create an Image for VM creation Create vm1 with the image and an additional data volume disk-1 Login to vm1, execute following commands: fdisk /dev/vdb with new and primary partition mkfs.ext4 /dev/vdb1 mkdir vdb &amp;&amp; mount -t ext4 /dev/vdb1 vdb ping 127.0.0.1 | tee -a vdb/test Navigate to Volumes, then click Take Snapshot button on disk-1 of vm1 into vm1-disk-2 Navigate to Virtual Machines, then update vm1 to add existing volume vm1-disk-2 Login to vm1 then mount /dev/vdb1(disk-1) and /dev/vdc1(disk-2) into vdb and vdc test file should be appeared in both folders of vdb and vdc test file should not be empty in both folders of vdb and vdc + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2294">https://github.com/harvester/harvester/issues/2294</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create an Image for VM creation</li> <li>Create <code>vm1</code> with the image and an additional data volume <code>disk-1</code></li> <li>Login to <code>vm1</code>, execute following commands: <ul> <li><code>fdisk /dev/vdb</code> with new and primary partition</li> <li><code>mkfs.ext4 /dev/vdb1</code></li> <li><code>mkdir vdb &amp;&amp; mount -t ext4 /dev/vdb1 vdb</code></li> <li><code>ping 127.0.0.1 | tee -a vdb/test</code></li> </ul> </li> <li>Navigate to Volumes, then click <strong>Take Snapshot</strong> button on <code>disk-1</code> of <strong>vm1</strong> into <strong>vm1-disk-2</strong></li> <li>Navigate to Virtual Machines, then update <code>vm1</code> to add existing volume <code>vm1-disk-2</code></li> <li>Login to <code>vm1</code> then mount <code>/dev/vdb1</code>(disk-1) and <code>/dev/vdc1</code>(disk-2) into 
<em>vdb</em> and <em>vdc</em></li> <li>The test file should appear in both folders <em>vdb</em> and <em>vdc</em></li> <li>The test file should not be empty in either <em>vdb</em> or <em>vdc</em></li> </ol> Sync harvester node’s topology labels to rke2 guest-cluster’s node https://harvester.github.io/tests/manual/_incoming/1418-sync-topology-labels-to-rke2-guest-cluster-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1418-sync-topology-labels-to-rke2-guest-cluster-node/ - Related issues: #1418 Support topology aware scheduling of guest cluster workloads Verification Steps Add topology labels(topology.kubernetes.io/region, topology.kubernetes.io/zone) to the Harvester node: In Harvester UI, select Hosts page. Click hosts&rsquo; Edit Config. Select Labels page, click Add Labels. Fill in, e.g., Key: topology.kubernetes.io/zone, Value: zone1. Create harvester guest-cluster from rancher-UI. Wait for the guest-cluster to be created successfully and check if the guest-cluster node labels are consistent with the harvester nodes. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1418">#1418</a> Support topology aware scheduling of guest cluster workloads</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Add <a href="https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesioregion">topology labels</a>(<code>topology.kubernetes.io/region</code>, <code>topology.kubernetes.io/zone</code>) to the Harvester node:</p> <ul> <li>In Harvester UI, select <code>Hosts</code> page.</li> <li>Click hosts&rsquo; <code>Edit Config</code>.</li> <li>Select <code>Labels</code> page, click <code>Add Labels</code>.</li> <li>Fill in, e.g., Key: <code>topology.kubernetes.io/zone</code>, Value: <code>zone1</code>.</li> </ul> </li> <li> <p>Create harvester guest-cluster from rancher-UI.</p> </li> <li> <p>Wait for the guest-cluster to be created successfully and check if the guest-cluster node labels are consistent with the harvester nodes.</p> Sync image display name to image labels https://harvester.github.io/tests/manual/_incoming/2630-sync-image-display-name-to-image-labels/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2630-sync-image-display-name-to-image-labels/ - Related issues: #2630 [FEATURE] Sync image display_name to image labels Category: Image Verification Steps Login harvester dashboard Access the Preference page Enable developer tool Create an ubuntu focal image from url https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img View API of the created image Check that the display name can be found in the image API content Create the same ubuntu focal image from previous url again which would bring the same display name Check that it would be denied with an error message Create a different ubuntu focal image with the same display name Expected Results In image API content, label harvesterhci.
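<p>(Optional CLI cross-check for the display-name label in this case, a sketch assuming <code>kubectl</code> access to the cluster and the usual VirtualMachineImage CRD name; namespaces depend on your setup.)</p> <pre tabindex="0"><code># list Harvester VM images with their labels, including harvesterhci.io/imageDisplayName
kubectl get virtualmachineimages.harvesterhci.io -A --show-labels
</code></pre>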
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2630">#2630</a> [FEATURE] Sync image display_name to image labels</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Image</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Login harvester dashboard</li> <li>Access the Preference page</li> <li>Enable developer tool <img src="https://user-images.githubusercontent.com/29251855/187353113-495af11e-a3e5-4f8e-b03b-174b4f0660ea.png" alt="image"></li> <li>Create an ubuntu focal image from url <a href="https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img">https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img</a> <img src="https://user-images.githubusercontent.com/29251855/187353177-52516c6d-8e68-4ac5-8b40-4006f6460773.png" alt="image"></li> <li>View API of the created image <img src="https://user-images.githubusercontent.com/29251855/187353338-1f0691f3-b19a-4382-a26f-ab5897842474.png" alt="image"></li> <li>Check can found the display name in the image API content</li> <li>Create the same ubuntu focal image from previous url again which would bring the same display name</li> <li>Check would be denied with error message</li> <li>Create a different ubuntu focal image with the same display name</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>In image API content, label <code>harvesterhci.io/imageDisplayName</code> added to labels, and it&rsquo;s value should be the displayName value <img src="https://user-images.githubusercontent.com/29251855/187353496-39c20027-f438-43de-a212-4f38b2dfbbae.png" alt="image"></li> <li>Image with the same display name in label would be denied by admission webhook &ldquo;validator.harvesterhci.io&rdquo; <img src="https://user-images.githubusercontent.com/29251855/187354352-ea2f08f3-01a1-4088-899b-d92e25433781.png" alt="image"></li> <li>Image with the same display name but different url would also be denied <img src="https://user-images.githubusercontent.com/29251855/187355241-845b09b5-953b-4e90-9948-ca8b025a6f5d.png" alt="image"></li> </ol> template with EFI (e2e_fe) https://harvester.github.io/tests/manual/_incoming/2577-template-with-efi/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2577-template-with-efi/ - Related issues: #2577 [BUG] Boot in EFI mode not selected when creating multiple VM instances using VM template with EFI mode selected. Category Template Verification Steps Go to Template, create a VM template with Boot in EFI mode selected. Go to Virtual Machines, click Create, select Multiple instance, type in a random name prefix, and select the VM template we just created. Go to Advanced Options, for now this EFI checkbox should be checked without any issue. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2577">#2577</a> [BUG] Boot in EFI mode not selected when creating multiple VM instances using VM template with EFI mode selected.</li> </ul> <h2 id="category">Category</h2> <ul> <li>Template</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Go to Template, create a VM template with Boot in EFI mode selected. <img src="https://user-images.githubusercontent.com/9990804/181196319-d95a4d23-ea31-418c-9fd2-152821d56930.png" alt="image"></li> <li>Go to Virtual Machines, click Create, select Multiple instance, type in a random name prefix, and select the VM template we just created. 
<img src="image.png" alt="image"></li> <li>Go to Advanced Options, for now this EFI checkbox should be checked without any issue. <img src="https://user-images.githubusercontent.com/9990804/181196934-1249902f-47dd-44dc-bced-5911ffcfdf16.png" alt="image"></li> <li>Create a VM with template</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Check VM setting, the booting in EFI mode is checked <img src="https://user-images.githubusercontent.com/29251855/182343254-4a421a04-aa3f-471c-a258-930a98cc84d3.png" alt="image"></li> <li>Verify that VM is running with UEFI using</li> </ol> <pre tabindex="0"><code>ubuntu@efi-01:~$ ls /sys/firmware/ acpi dmi efi memmap </code></pr Terraform import VLAN https://harvester.github.io/tests/manual/_incoming/2261-terraform-import-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2261-terraform-import-vlan/ - Related issues: #2261 [FEATURE] enhance terraform network to not pruge route_cidr and route_gateway Category: Terraform Verification Steps Install Harvester with any nodes Install terraform-harvester-provider (using master-head for testing) Execute terraform init Create the file network.tf as following snippets, then execute terraform import harvester_clusternetwork.vlan vlan to import default vlan settings resource &#34;harvester_clusternetwork&#34; &#34;vlan&#34; { name = &#34;vlan&#34; enable = true default_physical_nic = &#34;harvester-mgmt&#34; } resource &#34;harvester_network&#34; &#34;vlan1&#34; { name = &#34;vlan1&#34; namespace = &#34;harvester-public&#34; vlan_id = 1 route_mode = &#34;auto&#34; } execute terraform apply Login to dashboard then navigate to Advanced/Networks, make sure the Route Connectivity becomes Active Execute terraform apply again and many more times Expected Results Resources should not be changed or added or destroyed. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2261">#2261</a> [FEATURE] enhance terraform network to not pruge route_cidr and route_gateway</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Terraform</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester with any nodes</li> <li>Install terraform-harvester-provider (using master-head for testing)</li> <li>Execute <code>terraform init</code></li> <li>Create the file network.tf as following snippets, then execute <code>terraform import harvester_clusternetwork.vlan vlan</code> to import default vlan settings</li> </ol> <pre tabindex="0"><code>resource &#34;harvester_clusternetwork&#34; &#34;vlan&#34; { name = &#34;vlan&#34; enable = true default_physical_nic = &#34;harvester-mgmt&#34; } resource &#34;harvester_network&#34; &#34;vlan1&#34; { name = &#34;vlan1&#34; namespace = &#34;harvester-public&#34; vlan_id = 1 route_mode = &#34;auto&#34; } </code></pr Terraformer import KUBECONFIG https://harvester.github.io/tests/manual/_incoming/2604-terraformer-import-kubeconfig/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2604-terraformer-import-kubeconfig/ - Related issues: #2604 [BUG] Terraformer imported VLAN always be 0 Category: Terraformer Verification Steps Install Harvester with any nodes Login to dashboard, navigate to: Advanced/Settings -&gt; then enabledvlan` Navigate to Advanced/Networks and Create a Network which Vlan ID is not 0 Navigate to Support Page and Download KubeConfig file Initialize a terraform environment, download Harvester Terraformer Execute command terraformer import harvester -r network to generate terraform configuration from the cluster Generated file generated/harvester/network/network. 
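<p>(A minimal spot-check of the generated configuration for the terraformer case here, assuming the default output path shown in the steps.)</p> <pre tabindex="0"><code># the imported vlan_id should match the VLAN created in the UI (not 0)
grep vlan_id generated/harvester/network/network.tf
</code></pre>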
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2604">#2604</a> [BUG] Terraformer imported VLAN always be 0</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Terraformer</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester with any nodes</li> <li>Login to dashboard, navigate to: Advanced/Settings -&gt; then enabledvlan`</li> <li>Navigate to Advanced/Networks and Create a Network which Vlan ID is not 0</li> <li>Navigate to Support Page and Download KubeConfig file</li> <li>Initialize a terraform environment, download Harvester Terraformer</li> <li>Execute command <code>terraformer import harvester -r network</code> to generate terraform configuration from the cluster</li> <li>Generated file <code>generated/harvester/network/network.tf</code> should exists</li> <li>VLAN and other settings should match</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>vlan_id should be the same as the import cluster.</li> </ol> Testing Harvester Storage Tiering https://harvester.github.io/tests/manual/_incoming/2147-testing-storage-tiering/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2147-testing-storage-tiering/ - Related issues: #2147 [[FEATURE] Storage Tiering Category: Images Volumes VirtualMachines Test Setup Steps Have a Harvester Node with 3 Disks in total (one main disk, two additional disks), ideally the two additional disks should be roughly 20/30Gi for testing Add the additional disks to the harvester node (you may first need to be on the node itself and do a sudo gdisk /dev/sda and then w and y to write the disk identifier so that Harvester can recogonize the disk, note you shouldn&rsquo;t need to build partitions) Add the disks to the Harvester node via: Hosts -&gt; Edit Config -&gt; Storage -&gt; &ldquo;Add Disk&rdquo; (call-to-action), they should auto populate with available disks that you can add Save Navigate back to Hosts -&gt; Host -&gt; Edit Config -&gt; Storage, then add a Host Tag, and a unique disk tag for every disk (including the main disk/default-disk) Verification Steps with Checks Navigate to Advanced -&gt; Storage Classes -&gt; Create (Call-To-Action), create a storageClass &ldquo;sc-a&rdquo;, specify nodeSelector (choose host), diskSelector (choose one of the unique disk tags), number of replicas (1-12) Also create a storageClass &ldquo;sc-b&rdquo;, specify nodeSelector (choose host), diskSelector (choose one of the unique disk tags), number of replicas (1-12) Create a new image img-a, specify storageClassName to sc-a Create a new vm vm1 use the image img-a Check the replicas number and location of rootdisk volume in longhorn UI Create a new volume volume-a by choose source=image img-a Add the volume volume-a to vm vm1 Check the replicas number and location of volume volume-a in longhorn UI: volume-a, should also be seen in kubectl get pv --all-namespaces (where &ldquo;Claim&rdquo; is volume-a) with the appropriate storage class also with something like kubectl describe pv/pvc-your-uuid-from-get-pv-call-with-volume-a --all-namespaces: can audit volume attributes like: VolumeAttributes: diskSelector=second migratable=true nodeSelector=node-2 numberOfReplicas=1 share=true staleReplicaTimeout=30 storage. 
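<p>(Sketch of an extra check for the storage tiering steps here, assuming <code>kubectl</code> access; <code>sc-a</code> is the storage class created in the verification steps.)</p> <pre tabindex="0"><code># the diskSelector/nodeSelector/numberOfReplicas parameters should match what was set in the UI
kubectl get storageclass sc-a -o yaml
</code></pre>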
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2147">#2147</a> [[FEATURE] Storage Tiering</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Images</li> <li>Volumes</li> <li>VirtualMachines</li> </ul> <h2 id="test-setup-steps">Test Setup Steps</h2> <ol> <li>Have a Harvester Node with 3 Disks in total (one main disk, two additional disks), ideally the two additional disks should be roughly 20/30Gi for testing</li> <li>Add the additional disks to the harvester node (you may first need to be on the node itself and do a <code>sudo gdisk /dev/sda</code> and then <code>w</code> and <code>y</code> to write the disk identifier so that Harvester can recogonize the disk, note you shouldn&rsquo;t need to build partitions)</li> <li>Add the disks to the Harvester node via: Hosts -&gt; Edit Config -&gt; Storage -&gt; &ldquo;Add Disk&rdquo; (call-to-action), they should auto populate with available disks that you can add</li> <li>Save</li> <li>Navigate back to Hosts -&gt; Host -&gt; Edit Config -&gt; Storage, then add a Host Tag, and a unique disk tag for every disk (including the main disk/default-disk)</li> </ol> <h2 id="verification-steps-with-checks">Verification Steps with Checks</h2> <ol> <li>Navigate to Advanced -&gt; Storage Classes -&gt; Create (Call-To-Action), create a storageClass &ldquo;sc-a&rdquo;, specify nodeSelector (choose host), diskSelector (choose one of the unique disk tags), number of replicas (1-12)</li> <li>Also create a storageClass &ldquo;sc-b&rdquo;, specify nodeSelector (choose host), diskSelector (choose one of the unique disk tags), number of replicas (1-12)</li> <li>Create a new image img-a, specify storageClassName to sc-a</li> <li>Create a new vm vm1 use the image img-a</li> <li>Check the replicas number and location of rootdisk volume in longhorn UI</li> <li>Create a new volume volume-a by choose source=image img-a</li> <li>Add the volume volume-a to vm vm1</li> <li>Check the replicas number and location of volume volume-a in longhorn UI: <ol> <li>volume-a, should also be seen in <code>kubectl get pv --all-namespaces</code> (where &ldquo;Claim&rdquo; is volume-a) with the appropriate storage class</li> <li>also with something like <code>kubectl describe pv/pvc-your-uuid-from-get-pv-call-with-volume-a --all-namespaces</code>: <ol> <li>can audit volume attributes like:</li> </ol> <pre tabindex="0"><code> VolumeAttributes: diskSelector=second migratable=true nodeSelector=node-2 numberOfReplicas=1 share=true staleReplicaTimeout=30 storage.kubernetes.io/csiProvisionerIdentity=1665780638152-8081-driver.longhorn.io </code></pr The count of volume snapshots should not include VM's snapshots https://harvester.github.io/tests/manual/_incoming/3004-volume-snaphost-not-include-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/3004-volume-snaphost-not-include-vm/ - Related issues: #3004 [BUG] The count of volume snapshots should not include VM&rsquo;s snapshots Category: Volume Verification Steps Create a VM vm1 Take a VM snapshot Check the volume snapshot page Check the VM snapshot page Expected Results When one VM is created Only VM snap are created The count of volume snapshots should not include VM&rsquo;s snapshots. 
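<p>(Optional CLI cross-check for the snapshot counting case here, a sketch assuming <code>kubectl</code> access; it only lists the standard VolumeSnapshot resources, which the Volume Snapshot page is expected to filter.)</p> <pre tabindex="0"><code># volume snapshots that belong to a VM snapshot should not be counted as standalone volume snapshots
kubectl get volumesnapshots -A
</code></pre>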
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/3004">#3004</a> [BUG] The count of volume snapshots should not include VM&rsquo;s snapshots</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Volume</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a VM <code>vm1</code></li> <li>Take a VM snapshot</li> <li>Check the volume snapshot page</li> <li>Check the VM snapshot page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>When one VM is created <img src="https://user-images.githubusercontent.com/29251855/197482909-baf7d1f4-4032-4180-bb88-22aac8b9a8bc.png" alt="image"></p> </li> <li> <p>Only VM snapshots are created <img src="https://user-images.githubusercontent.com/29251855/197484294-46b89b29-78be-4d28-a33c-77aa525850a8.png" alt="image"></p> </li> <li> <p>The count of volume snapshots should not include VM&rsquo;s snapshots. <img src="https://user-images.githubusercontent.com/29251855/197484528-ed4c562b-782b-400e-99ec-fa97e292568d.png" alt="image"></p> Topology aware scheduling of guest cluster workloads https://harvester.github.io/tests/manual/_incoming/1418-2383-topology-scheduling-of-guest-cluster-workloads/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1418-2383-topology-scheduling-of-guest-cluster-workloads/ - Related issues: #1418 [FEATURE] Support topology aware scheduling of guest cluster workloads Related issues: #2383 [backport v1.0.3] [FEATURE] Support topology aware scheduling of guest cluster workloads Category: Rancher integration Verification Steps Environment preparation as above steps Access Harvester node config page Add the following node labels with values topology.kubernetes.io/zone topology.kubernetes.io/region Provision an RKE2 cluster Wait for the provisioning to complete Access RKE2 guest cluster Access the RKE2 cluster in Cluster Management page Click + to add another node Access the RKE2 cluster node page Wait until the second node is created Edit yaml of the second node Check the harvester node labels have propagated to the guest cluster node Expected Results The topology encoded in the Harvester cluster node labels Can be correctly propagated to the additional node of the RKE2 guest cluster + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1418">#1418</a> [FEATURE] Support topology aware scheduling of guest cluster workloads</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2383">#2383</a> [backport v1.0.3] [FEATURE] Support topology aware scheduling of guest cluster workloads</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Environment preparation as above steps</li> <li>Access Harvester node config page</li> <li>Add the following node labels with values <ul> <li>topology.kubernetes.io/zone</li> <li>topology.kubernetes.io/region</li> </ul> </li> <li>Provision an RKE2 cluster</li> <li>Wait for the provisioning to complete</li> <li>Access RKE2 guest cluster</li> <li>Access the RKE2 cluster in Cluster Management page</li> <li>Click + to add another node <img src="https://user-images.githubusercontent.com/29251855/177774100-63c1a229-19d4-45f7-bd4e-8d2453c9149f.png" alt="image"></li> <li>Access the RKE2 cluster node page <img src="https://user-images.githubusercontent.com/29251855/177774234-ed001086-75a2-46e7-9638-0771cc790fad.png" alt="image"></li> <li>Wait until the second node is created <img
src="https://user-images.githubusercontent.com/29251855/177774368-0c8b6ac1-15f0-4a64-8945-85551dc85e4f.png" alt="image"></li> <li>Edit yaml of the second node</li> <li>Check the harvester node label have propagated to the guest cluster node <img src="https://user-images.githubusercontent.com/29251855/177774559-8f278b2d-fff0-48ec-a62f-ceb3a9da8cc3.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li> <p>The topology encoded in the Harvester cluster node labels <img src="https://user-images.githubusercontent.com/29251855/177771658-1e3a8336-61c7-459d-9d4f-19e626ce9f23.png" alt="image"></p> Unable to stop VM which in starting state https://harvester.github.io/tests/manual/_incoming/2263_unable_to_stop_vm_which_in_starting_state/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2263_unable_to_stop_vm_which_in_starting_state/ - Ref: https://github.com/harvester/harvester/issues/2263 Verify Steps: Install Harvester with any nodes Create an Windows iso image for VM creation Create the Windows VM by using the iso image When the VM in Starting state, Stop button should able to click and work as expected + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2263">https://github.com/harvester/harvester/issues/2263</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create an Windows iso image for VM creation</li> <li>Create the Windows VM by using the iso image</li> <li>When the VM in <strong>Starting</strong> state, <strong>Stop</strong> button should able to click and work as expected</li> </ol> Upgrade guest cluster kubernetes version can also update the cloud provider chart version https://harvester.github.io/tests/manual/_incoming/2546-upgrade-guest-k8s-version-upgrade-cloud-provider/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2546-upgrade-guest-k8s-version-upgrade-cloud-provider/ - Related issues: #2546 [BUG] Harvester Cloud Provider is not able to deploy upgraded container after upgrading the cluster Category: Rancher integration Verification Steps Prepare the previous stable Rancher rc version and Harvester Update rke-metadata-config to {&quot;refresh-interval-minutes&quot;:&quot;1440&quot;,&quot;url&quot;:&quot;https://yufa-dev.s3.ap-east-1.amazonaws.com/data.json&quot;} in global settings Update the ui-dashboard-index to https://releases.rancher.com/dashboard/latest/index.html Set ui-offline-preferred to Remote Refresh web page (ctrl + r) Open Create RKE2 cluster page Check the show deprecated kubernetes patched versions Select v1.23.8+rke2r1 Finish the RKE2 cluster provision Check the current cloud provider version in workload page Edit RKE2 cluster, upgrade the kubernetes version to 1. 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2546">#2546</a> [BUG] Harvester Cloud Provider is not able to deploy upgraded container after upgrading the cluster</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher integration</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare the previous stable Rancher rc version and Harvester</li> <li>Update <code>rke-metadata-config</code> to <code>{&quot;refresh-interval-minutes&quot;:&quot;1440&quot;,&quot;url&quot;:&quot;https://yufa-dev.s3.ap-east-1.amazonaws.com/data.json&quot;}</code> in global settings <img src="https://user-images.githubusercontent.com/29251855/180735267-939e92e3-7fd5-4659-8bc8-ab14c95161d8.png" alt="image"></li> <li>Update the ui-dashboard-index to <code>https://releases.rancher.com/dashboard/latest/index.html</code></li> <li>Set <code>ui-offline-preferred</code> to <code>Remote</code></li> <li>Refresh web page (ctrl + r)</li> <li>Open Create RKE2 cluster page</li> <li>Check the <code>show deprecated kubernetes patched versions</code> <img src="https://user-images.githubusercontent.com/29251855/180736528-feaa9615-ccf9-482b-9354-c2c9a6a4b23b.png" alt="image"></li> <li>Select <code>v1.23.8+rke2r1</code></li> <li>Finish the RKE2 cluster provision <img src="https://user-images.githubusercontent.com/29251855/180738516-3f429bba-22ab-4476-bebf-0ac2f87935c3.png" alt="image"></li> <li>Check the current cloud provider version in workload page <img src="https://user-images.githubusercontent.com/29251855/180738877-56afcd55-e519-48d9-a8b8-3cbed91a1dfb.png" alt="image"></li> <li>Edit RKE2 cluster, upgrade the kubernetes version to <code>1.23.9-rc3+rke2r1</code> <img src="https://user-images.githubusercontent.com/29251855/180739231-e61ef680-5a9d-480b-9ac9-eda7839e17b6.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/180739331-611b05d4-0c5d-4835-9da0-8c05b9cca027.png" alt="image"></li> <li>Wait for update finish <img src="https://user-images.githubusercontent.com/29251855/180739876-dc409fa8-a9a6-406b-a614-085cea57121f.png" alt="image"></li> <li>The cloud provider is upgrading <img src="https://user-images.githubusercontent.com/29251855/180740637-5d1c6ce0-07ed-4a62-a364-f1b5e9fe473f.png" alt="image"></li> <li>delete the old cloud provider version pod (v0.1.3) <img src="https://user-images.githubusercontent.com/29251855/180740767-e6d5cdc2-c004-4c7a-8298-690775265002.png" alt="image"></li> <li>Wait for newer version cloud provider have been bumped <img src="https://user-images.githubusercontent.com/29251855/180740875-38fa0cc0-c13a-4e39-ba46-5e869eadf087.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/180740998-80e451e5-ad91-4111-8abe-f51395427b9c.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <p>After upgrading the existing RKE2 guest cluster kubernetes version from older <code>v1.23.8+rke2r1</code> to <code>1.23.9-rc3+rke2r1</code>. 
The Harvester cloud provider can successfully updated from <code>v0.1.3</code> to <code>v0.1.4</code></p> Upgrade Harvester on node that has bonded NICs for management interface https://harvester.github.io/tests/manual/_incoming/3045-upgrade-with-bonded-nic/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/3045-upgrade-with-bonded-nic/ - Related issues: #3045 [BUG] Harvester Upgrade 1.0.3 to 1.1.0 does not handle multiple SLAVE in BOND for management interface Category: Upgrade Environment Setup This is to be done on a Harvester cluster where the NICs were configured to be bonded on install for the management interface. This can be done in one of two ways. Single node virtualized environment Bare metal environment with at least two NICs (this should really be done on 10gig NICs, but can be done on gigabit) Both NICs should be on the same VLAN/network with the same subnet + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/3045">#3045</a> [BUG] Harvester Upgrade 1.0.3 to 1.1.0 does not handle multiple SLAVE in BOND for management interface</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade</li> </ul> <h2 id="environment-setup">Environment Setup</h2> <ul> <li> <p>This is to be done on a Harvester cluster where the NICs were configured to be bonded on install for the management interface. This can be done in one of two ways.</p> <ul> <li>Single node virtualized environment</li> <li>Bare metal environment with at least two NICs (this should really be done on 10gig NICs, but can be done on gigabit)</li> </ul> </li> <li> <p>Both NICs should be on the same VLAN/network with the same subnet</p> Upgrade support of audit and event log https://harvester.github.io/tests/manual/_incoming/2750-support-audit-event-log/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2750-support-audit-event-log/ - Related issues: #2750 [FEATURE] Upgrade support of audit and event log Category: Logging Audit Verification Steps Prepare v1.0.3 cluster, single-node and multi-node need to be tested separately Upgrade to v1.1.0-rc2 / master-head The upgrade should be successful, if not, check log and POD errors After upgrade, check following PODs and files, there should be no error Expected Results Check both Single and Multi nodes upgrade of the following: Check the following files and pods have no error + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2750">#2750</a> [FEATURE] Upgrade support of audit and event log</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Logging Audit</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare v1.0.3 cluster, single-node and multi-node need to be tested separately</li> <li>Upgrade to v1.1.0-rc2 / master-head</li> <li>The upgrade should be successful, if not, check log and POD errors</li> <li>After upgrade, check following PODs and files, there should be no error</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Check both Single and Multi nodes upgrade of the following:</p> VLAN Upgrade Test https://harvester.github.io/tests/manual/_incoming/2734-vlan-upgrade-test/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2734-vlan-upgrade-test/ - Related issues: #2734 [FEATURE] VLAN enhancement upgrading Category: Upgrade Verification Steps Test plan 1: harvester-mgmt vlan1 Prepare a 3 nodes v1.0.3 Harvester cluster Enable network on harvester-mgmt Create vlan id 1 Create two VMs, 
one set to vlan 1 and another use harvester-mgmt Perform manual upgrade to v1.1.0 Test plan 2: enps0 NIC with valid vlan Prepare a 3 nodes v1.0.3 Harvester cluster Enable network on another NIC (eg. enp129s0) Create vlan id 91 on enp129s0 Create two VMs, one set to vlan 91 and another use harvester-mgmt Perform manual upgrade to v1. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2734">#2734</a> [FEATURE] VLAN enhancement upgrading</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <h3 id="test-plan-1-harvester-mgmt-vlan1">Test plan 1: harvester-mgmt vlan1</h3> <ol> <li>Prepare a 3 nodes <code>v1.0.3</code> Harvester cluster</li> <li>Enable network on <code>harvester-mgmt</code></li> <li>Create vlan id <code>1</code></li> <li>Create two VMs, one set to vlan 1 and another use harvester-mgmt</li> <li>Perform manual upgrade to <code>v1.1.0</code></li> </ol> <h3 id="test-plan-2--enps0-nic-with-valid-vlan">Test plan 2: enps0 NIC with valid vlan</h3> <ol> <li>Prepare a 3 nodes <code>v1.0.3</code> Harvester cluster</li> <li>Enable network on another NIC (eg. <code>enp129s0</code>)</li> <li>Create vlan id <code>91</code> on <code>enp129s0</code></li> <li>Create two VMs, one set to vlan 91 and another use harvester-mgmt</li> <li>Perform manual upgrade to <code>v1.1.0</code></li> </ol> <h3 id="test-plan-3-bond-mode-using-harvester-config-file">Test plan 3: Bond mode using Harvester config file</h3> <ol> <li>Edit the ipxe-example add two additional NICs in Vagrantfile <pre tabindex="0"><code>harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; </code></pr VM boot stress test https://harvester.github.io/tests/manual/_incoming/2906-vm-boot-stress-test-/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2906-vm-boot-stress-test-/ - Related issues: #2906 [BUG] VM can’t boot due to filesystem corruption Category: Volume Verification Steps Create volume (Harvester, Longhorn storage class) Create volume from image Unmount volume from VM Delete volume in use and not in use Export volume to image Create VM from the exported image Edit volume to increase size Delete volume in use Clone volume Take volume snapshot Restore volume snapshot Utilize the E2E test in harvester/test repo to prepare a script to continues run step 1-11 at lease 100 runs Expected Results Pass more than 300 rounds of the I/O write test, Should Not encounter data corruption issue and VM is alive opensuse:~ # xfs_info /dev/vda3 meta-data=/dev/vda3 isize=512 agcount=13, agsize=653887 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=0, rmapbt=0 = reflink=0 data = bsize=4096 blocks=7858427, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1 log =internal log bsize=4096 blocks=2560, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2906">#2906</a> [BUG] VM can’t boot due to filesystem corruption</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Volume</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create volume (Harvester, Longhorn storage class)</li> <li>Create volume from image</li> <li>Unmount volume from VM</li> <li>Delete volume in use and not in use</li> 
<li>Export volume to image</li> <li>Create VM from the exported image</li> <li>Edit volume to increase size</li> <li>Delete volume in use</li> <li>Clone volume</li> <li>Take volume snapshot</li> <li>Restore volume snapshot</li> <li>Utilize the E2E test in harvester/test repo to prepare a script to continues run step 1-11 at lease 100 runs</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Pass more than 300 rounds of the I/O write test, <strong>Should Not</strong> encounter data corruption issue and VM is alive <pre tabindex="0"><code>opensuse:~ # xfs_info /dev/vda3 meta-data=/dev/vda3 isize=512 agcount=13, agsize=653887 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=0, rmapbt=0 = reflink=0 data = bsize=4096 blocks=7858427, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1 log =internal log bsize=4096 blocks=2560, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 </code></pr VM Import/Migration https://harvester.github.io/tests/manual/_incoming/2274-vm-import/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2274-vm-import/ - Related issues: #2274 [Feature] VM Import/Migration Category: Virtual Machine Test Information Test Environment: 1 node harvester on local kvm machine Harvester version: v1.1.0-rc1 Vsphere: 7.0 Openstack: Simulated using running devstack Download kubeconfig for harvester cluster Environment Setup Prepare Harvester master node Prepare vsphere setup (or use existing setup) Prepare a devstack cluster (Openstack 16.2) (stable/train) OpenStack Setup Prepare a baremetal or virtual machine to host the OpenStack service For automated installation on virtual machine, please refer to the cloud init user data in https://github. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2274">#2274</a> [Feature] VM Import/Migration</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Virtual Machine</li> </ul> <h2 id="test-information">Test Information</h2> <p>Test Environment:</p> <ul> <li>1 node harvester on local kvm machine</li> <li>Harvester version: v1.1.0-rc1</li> <li>Vsphere: 7.0</li> <li>Openstack: Simulated using running devstack</li> <li>Download kubeconfig for harvester cluster</li> </ul> <h2 id="environment-setup">Environment Setup</h2> <ol> <li>Prepare Harvester master node</li> <li>Prepare vsphere setup (or use existing setup)</li> <li>Prepare a devstack cluster (Openstack 16.2) (stable/train)</li> </ol> <h3 id="openstack-setup">OpenStack Setup</h3> <ol> <li>Prepare a baremetal or virtual machine to host the OpenStack service</li> <li>For automated installation on virtual machine, please refer to the <code>cloud init user data</code> in <a href="https://github.com/harvester/tests/issues/522#issuecomment-1654646620">https://github.com/harvester/tests/issues/522#issuecomment-1654646620</a></li> <li>For manual installation, we can also follow the command in the <code>cloud init user data</code></li> </ol> <h3 id="openstack-troubleshooting">OpenStack troubleshooting</h3> <p>If you failed create volume with the following error message <code>Error: Failed to perform requested operation on instance &quot;opensuse&quot;, the instance has an error status: Please try again later [Error: Build of instance 289d8c95-fd99-42a4-8eab-3a522e891463 aborted: Invalid input received: Invalid image identifier or unable to access requested image. 
(HTTP 400) (Request-ID: req-248baac7-a2de-4c51-9817-de653a548e3b)].</code></p> VM IP addresses should be labeled per network interface https://harvester.github.io/tests/manual/_incoming/2032-2370-vm-ip-lableled-per-network-interface/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2032-2370-vm-ip-lableled-per-network-interface/ - Related issues: #2032 [BUG] VM IP addresses should be labeled per network interface Related issues: #2370 [backport v1.0.3] VM IP addresses should be labeled per network interface Category: Virtual Machine Verification Steps Enable network with magement-mgmt interface Create vlan network vlan1 with id 1 Check the IP address on the VM page Create a VM with harvester-mgmt network Import Harvester in Rancher Provision a RKE2 cluster from Rancher Check the IP address on the VM page Expected Results Now the VM list only show IP which related to user access. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2032">#2032</a> [BUG] VM IP addresses should be labeled per network interface</li> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2370">#2370</a> [backport v1.0.3] VM IP addresses should be labeled per network interface</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Virtual Machine</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Enable network with magement-mgmt interface</li> <li>Create vlan network <code>vlan1</code> with id <code>1</code></li> <li>Check the IP address on the VM page</li> <li>Create a VM with <code>harvester-mgmt</code> network</li> <li>Import Harvester in Rancher</li> <li>Provision a RKE2 cluster from Rancher</li> <li>Check the IP address on the VM page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Now the VM list only show IP which related to user access.</li> <li>And provide hover message on each displayed IP address <img src="https://user-images.githubusercontent.com/29251855/173749441-06fdad41-147a-4703-b19f-eafb1af9f18d.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/173750324-9f26bcd2-024c-428f-a8bd-2a564c6078f2.png" alt="image"></li> </ul> VM label names consistentency before and after the restore https://harvester.github.io/tests/manual/_incoming/2662-vm-label-names-consistentency-after-the-restore/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2662-vm-label-names-consistentency-after-the-restore/ - Related issues: #2662 [BUG] VM label names should be consistent before and after the restore task is done Category: Network Verification Steps Create a VM named ubuntu Check the label name in virtual machine yaml content, label marked with harvesterhci.io/vmName Setup the S3 backup target Take a S3 backup with name After the backup task is done, delete the current VM Restore VM from the backup with the same name ubuntu (Create New) Check the yaml content after VM fully operated Expected Results The vm lable name is consistent to display harvesterhci. 
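<p>(Optional CLI check for the label consistency case here, a sketch assuming <code>kubectl</code> access; the namespace is an assumption and the VM name follows the steps.)</p> <pre tabindex="0"><code># run before the backup and again after the restore; the harvesterhci.io/vmName label should be identical
kubectl get vm ubuntu -n default -o yaml | grep vmName
</code></pre>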
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2662">#2662</a> [BUG] VM label names should be consistent before and after the restore task is done</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a VM named <code>ubuntu</code></li> <li>Check the label name in the virtual machine yaml content, the label is marked with <code>harvesterhci.io/vmName</code> <img src="https://user-images.githubusercontent.com/29251855/188374691-b36db1bc-2e2e-447b-96e1-699aa5e0ffee.png" alt="image"></li> <li>Set up the S3 backup target</li> <li>Take an S3 backup with a name</li> <li>After the backup task is done, delete the current VM</li> <li>Restore the VM from the backup with the same name <code>ubuntu</code> (Create New) <img src="https://user-images.githubusercontent.com/29251855/188378123-9af171af-c992-4e78-bdbb-8627903502ff.png" alt="image"></li> <li>Check the yaml content after the VM is fully operational</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>The VM label <code>harvesterhci.io/vmName</code> is displayed consistently after restoring from the backup.</p> VM Snapshot support https://harvester.github.io/tests/manual/_incoming/553_vm_snapshot_support/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/553_vm_snapshot_support/ - Ref: https://github.com/harvester/harvester/issues/553 Verify Steps: Install Harvester with any nodes Create an Image for VM creation Create vm1 with the image and an additional data volume disk-1 Login to vm1, execute the following commands: fdisk /dev/vdb with new and primary partition mkfs.ext4 /dev/vdb1 mkdir vdb &amp;&amp; mount -t ext4 /dev/vdb1 vdb ping 127.0.0.1 | tee -a test vdb/test Navigate to Virtual Machines page, click Take Snapshot button on vm1&rsquo;s details, named vm1s1 Execute sync on vm1 and Take Snapshot named vm1s2 Interrupt ping.
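<p>(For the snapshot content checks in this case, one possible way to compare the restored data, run inside each restored VM; a sketch only, paths follow the steps in this test.)</p> <pre tabindex="0"><code># compare the captured ping output across the restored VMs; contents should differ between snapshots taken at different points
md5sum test vdb/test
</code></pre>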
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/553">https://github.com/harvester/harvester/issues/553</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create an Image for VM creation</li> <li>Create <code>vm1</code> with the image and an additional data volume <code>disk-1</code></li> <li>Login to <code>vm1</code>, execute following commands: <ul> <li><code>fdisk /dev/vdb</code> with new and primary partition</li> <li><code>mkfs.ext4 /dev/vdb1</code></li> <li><code>mkdir vdb &amp;&amp; mount -t ext4 /dev/vdb1 vdb</code></li> <li><code>ping 127.0.0.1 | tee -a test vdb/test</code></li> </ul> </li> <li>Navigate to <em>Virtual Machines</em> page, click <strong>Take Snapshot</strong> button on <code>vm1</code>&rsquo;s details, named <code>vm1s1</code></li> <li>Execute <code>sync</code> on <code>vm1</code> and <strong>Take Snapshot</strong> named <code>vm1s2</code></li> <li>Interrupt <code>ping...</code> command and <code>rm test &amp;&amp; sync</code>, then <strong>Take Snapshot</strong> named <code>vm1s3</code></li> <li>Restore 3 snapshots into <strong>New</strong> VM: <code>vm1s1r</code>, <code>vm1s2r</code> and <code>vm1s3r</code></li> <li>Content of <code>test</code> and <code>vdb/test</code> should be the same in VM, and different in other restored VMs.</li> <li>Restore snapshots with <strong>Replace Existing</strong></li> <li>Content of <code>test</code> and <code>vdb/test</code> in restored <code>vm1</code> from the snapshot, should be the same as the VM restored with the same snapshot.</li> </ol> VM template is not working with Node scheduling https://harvester.github.io/tests/manual/_incoming/2244_vm_template_is_not_working_with_node_scheduling/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2244_vm_template_is_not_working_with_node_scheduling/ - Ref: https://github.com/harvester/harvester/issues/2244 Verify Steps: Install Harvester with any nodes Create an Image for VM creation Create VM with Multiple Instance and Use VM Template, In Node Scheduling tab, select Run VM on specific node(s) Created VMs should be scheduled on the specific node + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2244">https://github.com/harvester/harvester/issues/2244</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/177742575-31730953-5ffd-4018-b5ce-1b1e487ee14c.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create an Image for VM creation</li> <li>Create VM with <em><strong>Multiple Instance</strong></em> and <em><strong>Use VM Template</strong></em>, In <strong>Node Scheduling</strong> tab, select <code>Run VM on specific node(s)</code></li> <li>Created VMs should be scheduled on the specific node</li> </ol> VMIs created from VM Template don't have LiveMigrate evictionStrategy set https://harvester.github.io/tests/manual/_incoming/2357_vmis_created_from_vm_template_do_nott_have_livemigrate_evictionstrategy_set/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2357_vmis_created_from_vm_template_do_nott_have_livemigrate_evictionstrategy_set/ - Ref: https://github.com/harvester/harvester/issues/2357 Verify Steps: Install Harvester with at least 2 nodes Create Image for VM Creation Navigate to Advanced/Templates and create a template t1 Create VM vm1 from template t1 Edit YAML of vm1, field spec.template.spec.evictionStrategy should be LiveMigrate Enable Maintenance Mode 
on the host which hosting vm1 vm1 should start migrating automatically Migration should success + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2357">https://github.com/harvester/harvester/issues/2357</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Create Image for VM Creation</li> <li>Navigate to <em>Advanced/Templates</em> and create a template <code>t1</code></li> <li>Create VM <code>vm1</code> from template <code>t1</code></li> <li>Edit YAML of <code>vm1</code>, field <code>spec.template.spec.evictionStrategy</code> should be <code>LiveMigrate</code></li> <li>Enable Maintenance Mode on the host which hosting <code>vm1</code></li> <li><code>vm1</code> should start migrating automatically</li> <li>Migration should success</li> </ol> VMs can't start if a node contains more than ~60 VMs https://harvester.github.io/tests/manual/_incoming/2722_vms_can_not_start_if_a_node_contains_more_than_60_vms/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2722_vms_can_not_start_if_a_node_contains_more_than_60_vms/ - Ref: https://github.com/harvester/harvester/issues/2722 Verify Steps: Install Harvester with any nodes Login to console, execute sysctl -a | grep aio, the value of fs.aio-max-nr should be 1048576 Update the value by executing: mkdir -p /usr/local/lib/sysctl.d/ cat &gt; /usr/local/lib/sysctl.d/harvester.conf &lt;&lt;EOF fs.aio-max-nr = 61440 EOF sysctl --system Execute sysctl -a | grep aio, the value of fs.aio-max-nr should be 61440 Reboot the node then execute sysctl -a | grep aio, the value of fs.aio-max-nr should still be 61440 Create an image for VM creation Create 60 VMs and schedule on the node which updated fs. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2722">https://github.com/harvester/harvester/issues/2722</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/192251104-7a53a1a9-260d-4e90-aade-1b3e7c11cc52.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Login to console, execute <code>sysctl -a | grep aio</code>, the value of <code>fs.aio-max-nr</code> should be <code>1048576</code></li> <li>Update the value by executing:</li> </ol> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>mkdir -p /usr/local/lib/sysctl.d/ </span></span><span style="display:flex;"><span>cat &gt; /usr/local/lib/sysctl.d/harvester.conf <span style="color:#e6db74">&lt;&lt;EOF </span></span></span><span style="display:flex;"><span><span style="color:#e6db74">fs.aio-max-nr = 61440 </span></span></span><span style="display:flex;"><span><span style="color:#e6db74">EOF</span> </span></span><span style="display:flex;"><span>sysctl --system </span></span></code></pre></div><ol> <li>Execute <code>sysctl -a | grep aio</code>, the value of <code>fs.aio-max-nr</code> should be <code>61440</code></li> <li>Reboot the node then execute <code>sysctl -a | grep aio</code>, the value of <code>fs.aio-max-nr</code> should still be <code>61440</code></li> <li>Create an image for VM creation</li> <li>Create 60 VMs and schedule on the node which updated <code>fs.aio-max-nr</code></li> <li>Update <code>fs.aio-max-nr</code> to <code>1048576</code> in <code>/usr/local/lib/sysctl.d/harvester.conf</code> and execute <code>sysctl --system</code></li> <li>VMs should 
start successfully or stop with the error message <code>Too many pods</code></li> </ol> VolumeSnapshot Management https://harvester.github.io/tests/manual/_incoming/2296_volumesnapshot_management/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2296_volumesnapshot_management/ - Ref: https://github.com/harvester/harvester/issues/2296 Verify Steps: Install Harvester with any nodes Create an Image for VM creation Create vm vm1 and start it Take Snapshot on vm1 named vm1s1 Navigate to Volumes, click disks of vm1 then move to Snapshots tab, volume of snapshot vm1s1 should not be displayed Navigate to Advanced/Volume Snapshots, volumes of snapshot vm1s1 should not be displayed Navigate to Advanced/VM Snapshots, snapshot vm1s1 should be displayed + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2296">https://github.com/harvester/harvester/issues/2296</a></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester with any nodes</li> <li>Create an Image for VM creation</li> <li>Create vm <code>vm1</code> and start it</li> <li><strong>Take Snapshot</strong> on <code>vm1</code> named <code>vm1s1</code></li> <li>Navigate to <em>Volumes</em>, click disks of <code>vm1</code> then move to <strong>Snapshots</strong> tab, volume of snapshot <code>vm1s1</code> should not be displayed</li> <li>Navigate to <em>Advanced/Volume Snapshots</em>, volumes of snapshot <code>vm1s1</code> should not be displayed</li> <li>Navigate to <em>Advanced/VM Snapshots</em>, snapshot <code>vm1s1</code> should be displayed</li> </ol> Wrong mgmt bond MTU size during initial ISO installation https://harvester.github.io/tests/manual/_incoming/2437_wrong_mgmt_bond_mtu_size_during_initial_iso_installation/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/2437_wrong_mgmt_bond_mtu_size_during_initial_iso_installation/ - Ref: https://github.com/harvester/harvester/issues/2437 Verify Steps: Install Harvester via ISO and configure IPv4 Method with static Inputbox MTU (Optional) should be available and optional Configured MTU should be reflected in the port&rsquo;s MTU after installation + <p>Ref: <a href="https://github.com/harvester/harvester/issues/2437">https://github.com/harvester/harvester/issues/2437</a></p> <p><img src="https://user-images.githubusercontent.com/5169694/192757588-73484301-07e7-4a37-9d1e-cbcada9b5774.png" alt="image"> <img src="https://user-images.githubusercontent.com/5169694/192758868-422887df-557c-4d8c-9ee8-2ab0f863f97a.png" alt="image"></p> <h3 id="verify-steps">Verify Steps:</h3> <ol> <li>Install Harvester via ISO and configure <strong>IPv4 Method</strong> with <em>static</em></li> <li>Inputbox <code>MTU (Optional)</code> should be available and optional</li> <li>Configured MTU should be reflected in the port&rsquo;s MTU after installation</li> </ol> Zero downtime upgrade https://harvester.github.io/tests/manual/_incoming/1707-zero-downtime-upgrade/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/_incoming/1707-zero-downtime-upgrade/ - Related issues: #1707 [BUG] Zero downtime upgrade stuck in &ldquo;Waiting for VM live-migration or shutdown&hellip;&rdquo; Category: Upgrade Verification Steps Create a ubuntu image from URL Enable Network with management-mgmt Create a virtual network vlan1 with id 1 Setup backup target Create a VM backup Follow the guide to do upgrade test Expected Results Can upgrade correctly with all VMs remain in running + <ul> <li>Related issues: <a
href="https://github.com/harvester/harvester/issues/1707">#1707</a> [BUG] Zero downtime upgrade stuck in &ldquo;Waiting for VM live-migration or shutdown&hellip;&rdquo;</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a ubuntu image from URL</li> <li>Enable Network with management-mgmt</li> <li>Create a virtual network vlan1 with id 1</li> <li>Setup backup target</li> <li>Create a VM backup</li> <li>Follow the <a href="https://github.com/harvester/docs/blob/main/docs/upgrade/automatic.md">guide</a> to do upgrade test <img src="https://user-images.githubusercontent.com/29251855/166428121-391f5321-ec8e-46ce-9a96-ea92f04b3907.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/166429966-b08cea0e-c457-41b2-a647-b6d3ac00aa58.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can upgrade correctly with all VMs remain in running <img src="https://user-images.githubusercontent.com/29251855/166430303-376d9e30-bf92-49eb-b3e2-8eeeb2375702.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/166430680-bb9e14fe-7da5-4b73-9ec8-47a780b4914c.png" alt="image"></li> </ol> diff --git a/manual/advanced/addons/index.xml b/manual/advanced/addons/index.xml index da0f6d458..72c6dd16d 100644 --- a/manual/advanced/addons/index.xml +++ b/manual/advanced/addons/index.xml @@ -12,21 +12,21 @@ https://harvester.github.io/tests/manual/advanced/addons/5337-enable-addons-check-deployment/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/addons/5337-enable-addons-check-deployment/ - Related issues: #5337 [BUG] Failed to enable vm-importer, pcidevices and harvester-seeder controller addons, keep stuck in &ldquo;Enabling&rdquo; state Category: Addons Verification Steps Prepare three nodes Harvester cluster Open Advanced -&gt; Addons page Access to harvester node machine Switch to root user and open k9s Enable the vm-importer, pci-devices and harvester-seeder addons Check the corresponding jobs and logs Enable rest of the addons nvidia-driver-toolkit, rancher-monitoring and rancher-logging Expected Results Check the vm-importer, pci-devices and harvester-seeder display in Deployment Successful Check the vm-importer-controller, pci-devices-controller and harvester-seeder jobs and the related helm-install chart job all running well on the K9s Check the nvidia-driver-toolkit, rancher-monitoring and rancher-logging display in Deployment Successful Check the nvidia-driver-toolkit, rancher-monitoring and rancher-logging jobs and the related helm-install chart job all running well on the K9s + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/5337">#5337</a> [BUG] Failed to enable vm-importer, pcidevices and harvester-seeder controller addons, keep stuck in &ldquo;Enabling&rdquo; state</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Addons</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Prepare three nodes Harvester cluster</li> <li>Open Advanced -&gt; <code>Addons</code> page</li> <li>Access to harvester node machine</li> <li>Switch to root user and open k9s</li> <li>Enable the <code>vm-importer</code>, <code>pci-devices</code> and <code>harvester-seeder</code> addons</li> <li>Check the corresponding jobs and logs</li> <li>Enable rest of the addons <code>nvidia-driver-toolkit</code>, <code>rancher-monitoring</code> and <code>rancher-logging</code></li> </ol> <h2 
id="expected-results">Expected Results</h2> <ul> <li>Check the <code>vm-importer</code>, <code>pci-devices</code> and <code>harvester-seeder</code> display in <code>Deployment Successful</code></li> <li>Check the <code>vm-importer-controller</code>, <code>pci-devices-controller</code> and <code>harvester-seeder</code> jobs and the related helm-install chart job all running well on the K9s</li> <li>Check the <code>nvidia-driver-toolkit</code>, <code>rancher-monitoring</code> and <code>rancher-logging</code> display in <code>Deployment Successful</code> <img src="https://harvester.github.io/tests/images/addons/5337-enable-all-addons.png" alt="images/addons/5337-enable-all-addons.png"></li> <li>Check the <code>nvidia-driver-toolkit</code>, <code>rancher-monitoring</code> and <code>rancher-logging</code> jobs and the related helm-install chart job all running well on the K9s</li> </ul> PCI Devices Controller https://harvester.github.io/tests/manual/advanced/addons/pci-devices-controller/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/addons/pci-devices-controller/ - Pre-requisite Enable PCI devices Create a harvester cluster in bare metal mode. Ensure one of the nodes has NIC separate from the management NIC Go to the management interface of the new cluster Go to Advanced -&gt; PCI Devices Validate that the PCI devices aren&rsquo;t enabled Click the link to enable PCI devices Enable PCI devices in the linked addon page Wait for the status to change to Deploy successful Navigate to the PCI devices page Validate that the PCI devices page is populated/populating with PCI devices Case 1 (PCI NIC passthrough) Create a harvester cluster in bare metal mode. + <h2 id="pre-requisite-enable-pci-devices">Pre-requisite Enable PCI devices</h2> <ol> <li>Create a harvester cluster in bare metal mode. Ensure one of the nodes has NIC separate from the management NIC</li> <li>Go to the management interface of the new cluster</li> <li>Go to Advanced -&gt; PCI Devices</li> <li>Validate that the PCI devices aren&rsquo;t enabled</li> <li>Click the link to enable PCI devices</li> <li>Enable PCI devices in the linked addon page</li> <li>Wait for the status to change to Deploy successful</li> <li>Navigate to the PCI devices page</li> <li>Validate that the PCI devices page is populated/populating with PCI devices</li> </ol> <h2 id="case-1-pci-nic-passthrough">Case 1 (PCI NIC passthrough)</h2> <ol> <li>Create a harvester cluster in bare metal mode. 
Ensure one of the nodes has NIC separate from the management NIC</li> <li>Go to the management interface of the new cluster</li> <li>Go to Advanced -&gt; PCI Devices</li> <li>Check the box representing the PCI NIC device (identify it by the Description or the VendorId/DeviceId combination)</li> <li>Click Enable Passthrough</li> <li>When the NIC device is in an Enabled state, create a VM</li> <li>After creating the VM, edit the Config</li> <li>In the &ldquo;PCI Devices&rdquo; section, click the &ldquo;Available PCI Devices&rdquo; dropdown</li> <li>Select the PCI NIC device that has been enabled for passthrough</li> <li>Click Save</li> <li>Start the VM</li> <li>Once the VM is booted, run <code>lspci</code> at the command line (make sure the VM has the <code>pciutils</code> package) and verify that the PCI NIC device shows up</li> <li>(Optional) Install the driver for your PCI NIC device (if it hasn&rsquo;t been autoloaded)</li> </ol> <h3 id="case-1-dependencies">Case 1 dependencies:</h3> <ul> <li>PCI NIC separate from management network</li> <li>Enable PCI devices</li> </ul> <h2 id="case-2-gpu-passthrough">Case 2 (GPU passthrough)</h2> <h3 id="case-2-1-add-gpu">Case 2-1 Add GPU</h3> <ol> <li>Create a harvester cluster in bare metal mode. Ensure one of the nodes has a GPU separate from the management NIC</li> <li>Go to the management interface of the new cluster</li> <li>Go to Advanced -&gt; PCI Devices</li> <li>Check the box representing the GPU device (identify it by the Description or the VendorId/DeviceId combination)</li> <li>Click Enable Passthrough</li> <li>When the GPU device is in an Enabled state, create a VM</li> <li>After creating the VM, edit the Config</li> <li>In the &ldquo;PCI Devices&rdquo; section, click the &ldquo;Available PCI Devices&rdquo; dropdown</li> <li>Select the GPU device that has been enabled for passthrough</li> <li>Click Save</li> <li>Start the VM</li> <li>Once the VM is booted, run <code>lspci</code> at the command line (make sure the VM has the <code>pciutils</code> package) and verify that the GPU device shows up</li> <li>Install the driver for your GPU device <ol> <li>if the device is from NVIDIA: (this is for ubuntu, but the opensuse installation instructions are <a href="https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#suse-installation">here</a>) <pre tabindex="0"><code>wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb sudo dpkg -i cuda-keyring_1.1-1_all.deb sudo apt-get update sudo apt-get -y install cuda nvidia-cuda-toolkit build-essential </code></pr vGPU/SR-IOV GPU https://harvester.github.io/tests/manual/advanced/addons/2764-vgpu/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/addons/2764-vgpu/ - Related issues: #1661 vGPU Support Pre-requisite Enable PCI devices Create a harvester cluster in bare metal mode. Ensure one of the nodes has NIC separate from the management NIC Go to the management interface of the new cluster Go to Advanced -&gt; PCI Devices Validate that the PCI devices aren&rsquo;t enabled Click the link to enable PCI devices Enable PCI devices in the linked addon page Wait for the status to change to Deploy Successful Navigate to the PCI devices page Validate that the PCI devices page is populated/populating with PCI devices Pre-requisite Enable vGPU This can only be ran on a bare metal Harvester cluster that has an Nvidia card that support vGPU. 
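<p>For the <code>lspci</code> checks in the passthrough cases above, a minimal in-guest verification could look like the sketch below; it assumes an Ubuntu guest and an NVIDIA device, so adjust the package manager and the grep pattern to your setup.</p> <pre tabindex="0"><code># inside the guest VM, after attaching the PCI device and starting the VM
sudo apt-get update &amp;&amp; sudo apt-get install -y pciutils   # provides lspci
lspci -nnk | grep -i -A3 nvidia                               # the passed-through device should appear with its vendor:device ID
</code></pre>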
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2764">#1661</a> vGPU Support</li> </ul> <h1 id="pre-requisite-enable-pci-devices">Pre-requisite Enable PCI devices</h1> <ol> <li>Create a harvester cluster in bare metal mode. Ensure one of the nodes has NIC separate from the management NIC</li> <li>Go to the management interface of the new cluster</li> <li>Go to Advanced -&gt; PCI Devices</li> <li>Validate that the PCI devices aren&rsquo;t enabled</li> <li>Click the link to enable PCI devices</li> <li>Enable PCI devices in the linked addon page</li> <li>Wait for the status to change to <code>Deploy Successful</code></li> <li>Navigate to the PCI devices page</li> <li>Validate that the PCI devices page is populated/populating with PCI devices</li> </ol> <h1 id="pre-requisite-enable-vgpu">Pre-requisite Enable vGPU</h1> <p>This can only be ran on a bare metal Harvester cluster that has an Nvidia card that support vGPU. You will also need the Nvidia KVM driver and the Nvidia grid installer. These can be downloaded from Nvidia through your partner portal as outlined <a href="https://www.nvidia.com/en-us/drivers/vgpu-software-driver/">here</a></p> diff --git a/manual/advanced/index.xml b/manual/advanced/index.xml index 3e3149511..fb5e1b64d 100644 --- a/manual/advanced/index.xml +++ b/manual/advanced/index.xml @@ -12,98 +12,98 @@ https://harvester.github.io/tests/manual/advanced/chage-api-ui-source-bundled/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/chage-api-ui-source-bundled/ - Log in as admin Navigate to advanced settings Change api-ui-source to bundled Save Refresh page Check page source for dashboard loading location Expected Results Log in should complete Settings should save dashboard location should be loading from /dashboard/_nuxt/ (verify it in browser&rsquo;s developers tools) + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Change api-ui-source to bundled</li> <li>Save</li> <li>Refresh page</li> <li>Check page source for dashboard loading location</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Log in should complete</li> <li>Settings should save</li> <li>dashboard location should be loading from <code>/dashboard/_nuxt/</code> <ul> <li>(verify it in browser&rsquo;s developers tools)</li> </ul> </li> </ol> Change api-ui-source external (e2e_fe) https://harvester.github.io/tests/manual/advanced/chage-api-ui-source-external/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/chage-api-ui-source-external/ - Log in as admin Navigate to advanced settings Change api-ui-source to external Save Refresh page Check page source for dashboard loading location Expected Results Log in should complete Settings should save dashboard location should be loading from https://releases.rancher.com/harvester-ui/latest (verify it in browser&rsquo;s developers tools) + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Change api-ui-source to external</li> <li>Save</li> <li>Refresh page</li> <li>Check page source for dashboard loading location</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Log in should complete</li> <li>Settings should save</li> <li>dashboard location should be loading from <a href="https://releases.rancher.com/harvester-ui/latest">https://releases.rancher.com/harvester-ui/latest</a></li> <li>(verify it in browser&rsquo;s developers tools)</li> </ol> Change log level debug 
https://harvester.github.io/tests/manual/advanced/change-log-level-debug/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/change-log-level-debug/ - Log in as admin Navigate to advanced settings Edit config on log-level Choose Debug Save Create two VMs Reboot both VMs Download Logs Expected Results Login should complete Settings should save VMs should create VMs should reboot successfully Logs should show Debug level output + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Edit config on log-level</li> <li>Choose Debug</li> <li>Save</li> <li>Create two VMs</li> <li>Reboot both VMs</li> <li>Download Logs</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Login should complete</li> <li>Settings should save</li> <li>VMs should create</li> <li>VMs should reboot successfully</li> <li>Logs should show Debug level output</li> </ol> Change log level Info (e2e_fe) https://harvester.github.io/tests/manual/advanced/change-log-level-info/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/change-log-level-info/ - Log in as admin Navigate to advanced settings Edit config on log-level Choose Info Save Create two VMs Reboot both VMs Download Logs Expected Results Login should complete Settings should save VMs should create VMs should reboot successfully Logs should show Info level output + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Edit config on log-level</li> <li>Choose Info</li> <li>Save</li> <li>Create two VMs</li> <li>Reboot both VMs</li> <li>Download Logs</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Login should complete</li> <li>Settings should save</li> <li>VMs should create</li> <li>VMs should reboot successfully</li> <li>Logs should show Info level output</li> </ol> Change log level Trace (e2e_fe) https://harvester.github.io/tests/manual/advanced/change-log-level-trace/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/change-log-level-trace/ - Log in as admin Navigate to advanced settings Edit config on log-level Choose Trace Save Create two VMs Reboot both VMs Download Logs Expected Results Login should complete Settings should save VMs should create VMs should reboot successfully Logs should show Trace level output + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Edit config on log-level</li> <li>Choose Trace</li> <li>Save</li> <li>Create two VMs</li> <li>Reboot both VMs</li> <li>Download Logs</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Login should complete</li> <li>Settings should save</li> <li>VMs should create</li> <li>VMs should reboot successfully</li> <li>Logs should show Trace level output</li> </ol> Cluster TLS customization https://harvester.github.io/tests/manual/advanced/tls_customize/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/tls_customize/ - Ref: https://github.com/harvester/harvester/issues/1046 Verify Items Cluster&rsquo;s SSL/TLS parameters could be configured in install option Cluster&rsquo;s SSL/TLS parameters could be updated in dashboard Case: Configure TLS parameters in dashboard Install Harvester with any nodes Navigate to Advanced Settings, then edit ssl-parameters Select Protocols TLSv1.3, then save execute command echo QUIT | openssl s_client -connect &lt;VIP&gt;:443 -tls1_2 | grep &quot;Cipher is&quot; Output should contain error...SSL routines...
and Cipher is (NONE) execute command echo QUIT | openssl s_client -connect &lt;VIP&gt;:443 -tls1_3 | grep &quot;Cipher is&quot; Output should contain Cipher is &lt;one_of_TLS1_3_Ciphers&gt;1 and should not contain error. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1046">https://github.com/harvester/harvester/issues/1046</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Cluster&rsquo;s SSL/TLS parameters could be configured in install option</li> <li>Cluster&rsquo;s SSL/TLS parameters could be updated in dashboard</li> </ul> <h2 id="case-configure-tls-parameters-in-dashboard">Case: Configure TLS parameters in dashboard</h2> <ol> <li>Install Harvester with any nodes</li> <li>Navigate to Advanced Settings, then edit <code>ssl-parameters</code></li> <li>Select <strong>Protocols</strong> <code>TLSv1.3</code>, then save</li> <li>execute command <code>echo QUIT | openssl s_client -connect &lt;VIP&gt;:443 -tls1_2 | grep &quot;Cipher is&quot;</code></li> <li>Output should contain <code>error...SSL routines...</code> and <code>Cipher is (NONE)</code></li> <li>execute command <code>echo QUIT | openssl s_client -connect &lt;VIP&gt;:443 -tls1_3 | grep &quot;Cipher is&quot;</code></li> <li>Output should contain <code>Cipher is &lt;one_of_TLS1_3_Ciphers&gt;</code><sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> and should not contain <code>error...SSL...</code></li> <li>repeat Step 2, then select <strong>Protocols</strong> to <code>TLSv1.2</code> only, and input <strong>Ciphers</strong> <code>ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256</code></li> <li>execute command <code>echo QUIT | openssl s_client -connect &lt;VIP&gt;:443 -tls1_2 -cipher 'ECDHE-ECDSA-AES256-GCM-SHA384' | grep &quot;Cipher is&quot;</code></li> <li>Output should contain <code>error...SSL routines...</code> and <code>Cipher is (NONE)</code></li> </ol> <h2 id="case-configure-tls-parameters-in-install-configuration">Case: Configure TLS parameters in install configuration</h2> <ol> <li>Install harvester with PXE installation, set <code>ssl-parameters</code> in <code>system_settings</code> (see the <em>example</em> for more details)</li> <li>Harvester should be installed successfully</li> <li>Dashboard&rsquo;s <strong>ssl-parameters</strong> should be configured as expected</li> </ol> <pre tabindex="0"><code># example for ssl-parameters configure option system_settings: ssl-parameters: | { &#34;protocols&#34;: &#34;TLSv1.3&#34;, &#34;ciphers&#34;: &#34;TLS-AES-128-GCM-SHA256:TLS-AES-128-CCM-8-SHA256&#34; } </code></pr Fleet support with Harvester https://harvester.github.io/tests/manual/advanced/fleet-support-with-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/fleet-support-with-harvester/ - Fleet Support Pathways Fleet Support is enabled out of the box with Harvester, no Rancher integration needed, as Fleet Support does not need any Rancher integration to function Fleet Support can be used from within Rancher w/ Harvester Fleet Support w/ Rancher Prerequisites Harvester cluster is imported into Rancher. Rancher Feature Flag harvester-baremetal-container-workload is enabled. Harvester cluster is available to view via the Explore Cluster section of Rancher. 
Explore the Harvester cluster: Toggle &ldquo;All Namespaces&rdquo; to be selected Search for &amp; &ldquo;star&rdquo; (marking favorite for ease of navigation): Git Repo Git Job Git Restrictions Fleet Support w/out Rancher Prerequisites An active Harvester Cluster Kubeconfig Additional Prerequisites Fork ibrokethecloud&rsquo;s Harvester Fleet Demo into your own personal GitHub Repository Take a look at the different Harvester API Resources as YAML will be scaffolded to reflect those objects respectively Additional Prerequisites Airgapped, if desired Have an Airgapped GitLab Server Running somewhere with a Repo that takes the shape of ibrokethecloud&rsquo;s Harvester Fleet Demo (setting up AirGapped GitLab Server is outside of this scope) Additional Prerequisites (Private Repository Testing), if desired Private Git Repo Key, will need to be added to -n fleet-local namespace Build a private GitHub Repo Add similar content to what ibrokethecloud&rsquo;s Harvester Fleet Demo holds but take into consideration the following ( references: GitRepo CRD &amp; Rancher Fleet Private Git Repo Blurb ): building a &ldquo;separate&rdquo; SINGLE REPOSITORY ONLY (zero-trust based) SSH Key Via something like: ssh-keygen -t rsa -b 4096 -m pem -C &#34;testing-test-key-for-private-repo-deploy-key@email. + <h2 id="fleet-support-pathways">Fleet Support Pathways</h2> <ol> <li>Fleet Support is enabled out of the box with Harvester, no Rancher integration needed, as Fleet Support does not need any Rancher integration to function</li> <li>Fleet Support can be used from within Rancher w/ Harvester</li> </ol> <h3 id="fleet-support-w-rancher-prerequisites">Fleet Support w/ Rancher Prerequisites</h3> <ol> <li>Harvester cluster is imported into Rancher.</li> <li>Rancher Feature Flag <code>harvester-baremetal-container-workload</code> is enabled.</li> <li>Harvester cluster is available to view via the Explore Cluster section of Rancher.</li> <li>Explore the Harvester cluster: <ol> <li>Toggle &ldquo;All Namespaces&rdquo; to be selected</li> <li>Search for &amp; &ldquo;star&rdquo; (marking favorite for ease of navigation): <ul> <li>Git Repo</li> <li>Git Job</li> <li>Git Restrictions</li> </ul> </li> </ol> </li> </ol> <h3 id="fleet-support-wout-rancher-prerequisites">Fleet Support w/out Rancher Prerequisites</h3> <ol> <li>An active Harvester Cluster Kubeconfig</li> </ol> <h3 id="additional-prerequisites">Additional Prerequisites</h3> <ol> <li>Fork <a href="https://github.com/ibrokethecloud/harvester-fleet-demo/">ibrokethecloud&rsquo;s Harvester Fleet Demo</a> into your own personal GitHub Repository</li> <li>Take a look at the different <a href="https://docs.harvesterhci.io/v1.2/category/api">Harvester API Resources</a> as YAML will be scaffolded to reflect those objects respectively</li> </ol> <h3 id="additional-prerequisites-airgapped-if-desired">Additional Prerequisites Airgapped, if desired</h3> <ol> <li>Have an Airgapped GitLab Server Running somewhere with a Repo that takes the shape of <a href="https://github.com/ibrokethecloud/harvester-fleet-demo/">ibrokethecloud&rsquo;s Harvester Fleet Demo</a> (setting up AirGapped GitLab Server is outside of this scope)</li> </ol> <h3 id="additional-prerequisites-private-repository-testing-if-desired">Additional Prerequisites (Private Repository Testing), if desired</h3> <ol> <li><a href="https://fleet.rancher.io/gitrepo-add#adding-private-git-repository">Private Git Repo Key</a>, will need to be added to <code>-n fleet-local</code> namespace</li> <li>Build a private GitHub 
Repo</li> <li>Add similar content to what <a href="https://github.com/ibrokethecloud/harvester-fleet-demo/">ibrokethecloud&rsquo;s Harvester Fleet Demo</a> holds but take into consideration the following ( references: <a href="https://fleet.rancher.io/ref-gitrepo">GitRepo CRD</a> &amp; <a href="https://fleet.rancher.io/gitrepo-add#adding-private-git-repository">Rancher Fleet Private Git Repo Blurb</a> ): <ol> <li>building a &ldquo;separate&rdquo; SINGLE REPOSITORY ONLY (zero-trust based) SSH Key Via something like:</li> </ol> <pre tabindex="0"><code> ssh-keygen -t rsa -b 4096 -m pem -C &#34;testing-test-key-for-private-repo-deploy-key@email.com&#34; Generating public/private rsa key pair. Enter file in which to save the key (/home/mike/.ssh/id_rsa): /home/mike/.ssh/rsa_key_for_private_rancher_fleet_repo_testing Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/mike/.ssh/rsa_key_for_private_rancher_fleet_repo_testing Your public key has been saved in /home/mike/.ssh/rsa_key_for_private_rancher_fleet_repo_testing.pub </code></pr Set backup target S3 (e2e_fe) https://harvester.github.io/tests/manual/advanced/set-s3-backup-target/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/set-s3-backup-target/ - Log in as admin Navigate to advanced settings Edit config on backup-target Choose S3 Set valid S3 target Save Expected Results login should complete Settings should save You should not get an error message + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Edit config on backup-target</li> <li>Choose S3</li> <li>Set valid S3 target</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>login should complete</li> <li>Settings should save</li> <li>You should not get an error message</li> </ol> Set backup-target NFS (e2e_fe) https://harvester.github.io/tests/manual/advanced/set-nfs-backup-target/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/set-nfs-backup-target/ - Log in as admin Navigate to advanced settings Edit config on backup-target Choose NFS Set valid NFS target Save Expected Results login should complete Settings should save You should not get an error message + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Edit config on backup-target</li> <li>Choose NFS</li> <li>Set valid NFS target</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>login should complete</li> <li>Settings should save</li> <li>You should not get an error message</li> </ol> Set backup-target NFS invalid target https://harvester.github.io/tests/manual/advanced/negative-set-invalid-nfs-backup-target/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/negative-set-invalid-nfs-backup-target/ - Log in as admin Navigate to advanced settings Edit config on backup-target Choose NFS Set invalid NFS target Save Expected Results login should complete Settings should save You should get an error message + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Edit config on backup-target</li> <li>Choose NFS</li> <li>Set invalid NFS target</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>login should complete</li> <li>Settings should save</li> <li>You should get an error message</li> </ol> Set backup-target S3 invalid target https://harvester.github.io/tests/manual/advanced/negative-set-invalid-s3-backup-target/ 
Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/negative-set-invalid-s3-backup-target/ - Log in as admin Navigate to advanced settings Edit config on backup-target Choose S3 Set invalid S3 target Save Expected Results login should complete Settings should save You should get an error message + <ol> <li>Log in as admin</li> <li>Navigate to advanced settings</li> <li>Edit config on backup-target</li> <li>Choose S3</li> <li>Set invalid S3 target</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>login should complete</li> <li>Settings should save</li> <li>You should get an error message</li> </ol> SSL Certificate https://harvester.github.io/tests/manual/advanced/ssl-certificate/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/ssl-certificate/ - Ref: https://github.com/harvester/harvester/issues/761 Verify Items generated kubeconfig is able to access the kubernetes API new node is able to join the cluster using the configured Domain Name creating a node with ssl-certificates settings works as expected. Case: Kubeconfig Install Harvester with at least 2 nodes Generate self-signed TLS certificates from https://www.selfsignedcertificate.com/ with specific name Navigate to advanced settings, edit ssl-certificates settings Update generated .cert file to CA and Public Certificate, .key file to Private Key Relogin with domain name Navigate to Support page, then Click Download KubeConfig, file should be named local. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/761">https://github.com/harvester/harvester/issues/761</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>generated kubeconfig is able to access the kubernetes API</li> <li>new node is able to join the cluster using the configured Domain Name</li> <li>creating a node with ssl-certificates settings works as expected.</li> </ul> <h3 id="case-kubeconfig">Case: Kubeconfig</h3> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Generate self-signed TLS certificates from <a href="https://www.selfsignedcertificate.com/">https://www.selfsignedcertificate.com/</a> with specific name</li> <li>Navigate to advanced settings, edit <code>ssl-certificates</code> settings</li> <li>Update generated <code>.cert</code> file to <em>CA</em> and <em>Public Certificate</em>, <code>.key</code> file to <em>Private Key</em></li> <li>Relogin with domain name</li> <li>Navigate to Support page, then Click <strong>Download KubeConfig</strong>, file should be named <code>local.yaml</code></li> <li>Kubernetes API should be accessible with config <code>local.yaml</code> (follow one of the <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/">instructions</a> for testing)</li> </ol> <h3 id="case-host-joining-with-https-and-domain-name">Case: Host joining with https and Domain Name</h3> <ol> <li>Install Harvester with single node</li> <li>Generate self-signed TLS certificates from <a href="https://www.selfsignedcertificate.com/">https://www.selfsignedcertificate.com/</a> with specific name</li> <li>Navigate to advanced settings, edit <code>ssl-certificates</code> settings</li> <li>Update generated <code>.cert</code> file to <em>CA</em> and <em>Public Certificate</em>, <code>.key</code> file to <em>Private Key</em></li> <li>Install another Harvester Host as a joining node via PXE installation <ul> <li>the <code>server_url</code> MUST be configured as the specific domain name</li> <li>Remember to set <code>os.dns_nameservers</code> to make
sure the domain name is reachable.</li> </ul> </li> <li>The joining node should join the cluster successfully.</li> </ol> <h3 id="case-host-creating-with-ssl-certificates">Case: Host creating with SSL certificates</h3> <ol> <li>Install Harvester with single node via PXE installation <ul> <li>fill in <code>system_settings.ssl-certificates</code> as the format in <a href="https://github.com/harvester/harvester/issues/761#issuecomment-993060101">https://github.com/harvester/harvester/issues/761#issuecomment-993060101</a></li> </ul> </li> <li>Dashboard should be accessible via VIP and domain name</li> </ol> Timeout option for support bundle https://harvester.github.io/tests/manual/advanced/support_bundle_timeout/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/support_bundle_timeout/ - Ref: https://github.com/harvester/harvester/issues/1585 Verify Items A Timeout Option can be configured for support bundle Error message will display when the timeout is reached Case: Generate support bundle but hit timeout Install Harvester with at least 2 nodes Navigate to Advanced Settings, modify support-bundle-timeout to 2 Navigate to Support, Click Generate Support Bundle, and force shut down one of the nodes in the meantime. 2 mins later, the function will fail with an Error message pop-up as in the snapshot + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1585">https://github.com/harvester/harvester/issues/1585</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>A <strong>Timeout</strong> Option can be configured for support bundle</li> <li>Error message will display when the timeout is reached</li> </ul> <h2 id="case-generate-support-bundle-but-hit-timeout">Case: Generate support bundle but hit timeout</h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Navigate to Advanced Settings, modify <code>support-bundle-timeout</code> to <code>2</code></li> <li>Navigate to Support, Click <strong>Generate Support Bundle</strong>, and force shut down one of the nodes in the meantime.</li> <li><strong>2</strong> mins later, the function will fail with an Error message pop-up as in the snapshot <img src="https://user-images.githubusercontent.com/5169694/145191630-27ef156c-d8dd-4480-811c-c1ce39142491.png" alt="image"></li> </ol> Verify that vm-force-reset-policy works https://harvester.github.io/tests/manual/advanced/1661-vm-force-reset-policy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/1661-vm-force-reset-policy/ - Related issues: #1661 vm-force-deletion-policy for vm-force-reset-policy Environment setup Setup an airgapped harvester Create a 3 node harvester cluster Verification Steps Navigate to advanced settings and edit vm-force-reset-policy Set reset policy to 60 Create VM Run health checks Shut down node that is running VM Check for when it starts to migrate to new Host Expected Results It should migrate after 60 seconds + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1661">#1661</a> vm-force-deletion-policy for vm-force-reset-policy</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup an airgapped harvester</p> <ol> <li>Create a 3 node harvester cluster</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Navigate to advanced settings and edit vm-force-reset-policy <img src="https://user-images.githubusercontent.com/83787952/146448317-a259d86d-2020-4bed-adc2-f19ecf0d3fbb.png" alt="image"></li> <li>Set reset policy to <code>60</code></li>
<li>Create VM</li> <li>Run health checks</li> <li>Shut down node that is running VM</li> <li>Check for when it starts to migrate to new Host</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>It should migrate after 60 seconds</li> </ol> diff --git a/manual/authentication/index.xml b/manual/authentication/index.xml index 9db21d064..4621d915a 100644 --- a/manual/authentication/index.xml +++ b/manual/authentication/index.xml @@ -12,63 +12,63 @@ https://harvester.github.io/tests/manual/authentication/general-authentication/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/general-authentication/ - Enable Access Control. Choose “Allow any valid User” as “Site Access”. Make sure any user is able to access the site. Enable Access Control. Choose “Restrict to Specific User” and add a few users. Make sure only the specified users have access to the server. Others should get authentication error. Enable Access Control. Choose “Restrict to Specific User” and add a group. Make sure all users belonging to the group have access to the server. Others should get authentication error. + <ol> <li>Enable Access Control. Choose “Allow any valid User” as “Site Access”. Make sure any user is able to access the site.</li> <li>Enable Access Control. Choose “Restrict to Specific User” and add a few users. Make sure only the specified users have access to the server. Others should get authentication error.</li> <li>Enable Access Control. Choose “Restrict to Specific User” and add a group. Make sure all users belonging to the group have access to the server. Others should get authentication error.</li> <li>Log in as a normal user (who has access to the server but is not a member of any environment), a new default environment should be created for this user with this user account being the owner of the environment. An account entry should get created for this user.</li> <li>Log in as a normal user (who has access to the server), create a new environment. This user should become the owner of the environment.</li> <li>As owner of environment, add a user as “member” of an environment. Make sure this user gets access to this environment.</li> <li>As owner of environment, add a user as “owner” of an environment. Make sure this user gets access to this environment. User should also have ability to manage this environment which is to add/delete member of the environment.</li> <li>As owner of environment, add a group as “member” of an environment. Make sure that all users that belong to this group get access to the environment.</li> <li>As owner of environment, add a group as “owner” of an environment. Make sure all users of the group get access to this environment. User should also have ability to manage this environment which is to add/delete member of the environment.</li> <li>As owner of environment, change the role of a member of the environment from “owner” to “member”. Make sure his access control reflects this change.</li> <li>As owner of environment, change the role of a member of the environment from “member” to “owner”. Make sure his access control reflects this change.</li> <li>As owner of environment, remove an existing “owner” member of the environment. Make sure this user does not have access to the environment anymore.</li> <li>As owner of environment, remove an existing “member” member of the environment. Make sure this user does not have access to the environment anymore.</li> <li>As owner of environment, deactivate the environment.
Members of the environment should have no access to the environment. Owners should only be able to see it in their manage environments list but not in the list of active environments.</li> <li>As owner of environment, activate a deactivated environment. Members of the environment should now have access to the environment.</li> <li>As owner of environment, delete a deactivated environment. Members of the environment should not have access to the environment. All hosts relating to the environment should be purged (only hosts created through docker-machine). Custom hosts will not be purged.</li> <li>As admin user, deactivate an existing account. The account should have no access to rancher server.</li> <li>As admin user, activate a deactivated account. The account should get back access to rancher server.</li> <li>As admin user, delete an existing account. Once the account is purged, make sure that account is not a member of environments.</li> <li>Log in as a deleted account when the account is still not purged. The user should have no access to rancher server (like in deactivated state).</li> <li>Log in as a deleted account when the account is purged. When the user tries to log in, a new account entry will get created and it will not have any access to any existing environment this account had access to before the account was deleted.</li> <li>Delete a user that is a member of the project. List the members of the project; it should return the deleted user as a member of the project but should reflect the user as “unknown user”.</li> <li>As member user of environment, trying to add a member to the environment should fail.</li> <li>As member user of environment, trying to delete an existing member of the environment should fail.</li> <li>As member user of environment, trying to deactivate an environment should fail.</li> <li>As member user of environment, trying to delete an environment should fail.</li> <li>As member user of environment, trying to change the role of an existing member of the environment should fail.</li> <li>As admin user, change account type of existing &ldquo;user&rdquo; to &ldquo;admin&rdquo;.
Check that they have access to &ldquo;Admin&rdquo; tab.</li> </ol> <h2 id="special-characters-relating-test-cases">Special Characters relating test cases:</h2> <ol> <li>User name having special characters ( In this case user DN will have special characters)</li> <li>Group name having special characters( In this case group DN will have special characters)</li> <li>Password having special characters</li> </ol> <p>Test the above 3 test cases , by having &ldquo;required&rdquo; site access set for user/group as applicable.</p> Change user password (e2e_fe) https://harvester.github.io/tests/manual/authentication/1409-change-password/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/1409-change-password/ - Related issues: #1409 There&rsquo;s no way to change user password in single cluster UI Verification Steps Logged in with user Changed password Logged out Logged back in with new password Verified old password didn&rsquo;t work Expected Results Password should change and be accepted on new login Old password shouldn&rsquo;t work + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1409">#1409</a> There&rsquo;s no way to change user password in single cluster UI</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Logged in with user</li> <li>Changed password</li> <li>Logged out</li> <li>Logged back in with new password</li> <li>Verified old password didn&rsquo;t work</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Password should change and be accepted on new login</li> <li>Old password shouldn&rsquo;t work</li> </ol> Create SSH key from templates page https://harvester.github.io/tests/manual/authentication/1619-create-ssh-key-from-templates-page/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/1619-create-ssh-key-from-templates-page/ - Related issues: #1619 User is unable to create ssh key through the templates page Verification Steps on a harvester deployment, navigate to advanced -&gt; templates and click create Click create new under SSH section enter valid credentials and save Expected Results SSH key should be created and show in the SSH key section + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1619">#1619</a> User is unable to create ssh key through the templates page</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>on a harvester deployment, navigate to advanced -&gt; templates and click create</li> <li>Click create new under SSH section</li> <li>enter valid credentials and save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>SSH key should be created and show in the SSH key section</li> </ol> First Time Login (e2e_fe) https://harvester.github.io/tests/manual/authentication/first-time-login/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/first-time-login/ - After successful installation of Harvester using Iso, on navigating to UI, user should be prompted to change the password. 
Verify the password rules Expected Results User should be able to login + <ol> <li>After successful installation of Harvester using Iso, on navigating to UI, user should be prompted to change the password.</li> <li>Verify the password rules</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>User should be able to login</li> </ol> Login after password reset (e2e_fe) https://harvester.github.io/tests/manual/authentication/login-after-password-reset/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/login-after-password-reset/ - Enter the wrong credential. Enter the correct credential Expected Results Login should fail. Login should pass + <ol> <li>Enter the wrong credential.</li> <li>Enter the correct credential</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Login should fail.</li> <li>Login should pass</li> </ol> Logout from the UI and login again https://harvester.github.io/tests/manual/authentication/logout-then-login/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/logout-then-login/ - Logout from the UI and Log in again Expected Results User should be able to logout/login successfully. + <ol> <li>Logout from the UI and Log in again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>User should be able to logout/login successfully.</li> </ol> Multi-browser login https://harvester.github.io/tests/manual/authentication/multi-browser-login/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/multi-browser-login/ - Login via Chrome, firefox, edge, safari etc Expected Results Chrome, firefox, edge, safari etc should have same behavior. + <ol> <li>Login via Chrome, firefox, edge, safari etc</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Chrome, firefox, edge, safari etc should have same behavior.</li> </ol> UI enables option to display password on login page https://harvester.github.io/tests/manual/authentication/ui_password_show_btn/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/ui_password_show_btn/ - Ref: https://github.com/harvester/harvester/issues/1550 Verify Items Password field in login page can be toggle show/hide Case: Toggle of Password field install harvester with any nodes setup password logout then login with password toggled + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1550">https://github.com/harvester/harvester/issues/1550</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Password field in login page can be toggle show/hide</li> </ul> <h2 id="case-toggle-of-password-field">Case: Toggle of Password field</h2> <ol> <li>install harvester with any nodes</li> <li>setup password</li> <li>logout then login with password toggled</li> </ol> Verify SSH key was added from Github during install https://harvester.github.io/tests/manual/authentication/verify-github-ssh/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/authentication/verify-github-ssh/ - Add ssh key from Github while installing the Harvester. Login Harvester with github. Expected Results User should be able to logout/login successfully. 
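<p>For the GitHub SSH key case above, one way to cross-check the imported key is to compare the public keys GitHub publishes for the account with what lands on the node; this is only a sketch, and the username, node address, and the assumption that the key was added for the <code>rancher</code> user are placeholders to adapt.</p> <pre tabindex="0"><code>curl -s https://github.com/&lt;github-username&gt;.keys      # public keys GitHub serves for the account
ssh rancher@&lt;node-ip&gt; &#39;cat ~/.ssh/authorized_keys&#39;     # should contain the same public key
</code></pre>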
+ <ol> <li>Add ssh key from Github while installing the Harvester.</li> <li>Log in to Harvester with GitHub.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>User should be able to logout/login successfully.</li> </ol> diff --git a/manual/backup-and-restore/index.xml b/manual/backup-and-restore/index.xml index c1f580330..31fc22298 100644 --- a/manual/backup-and-restore/index.xml +++ b/manual/backup-and-restore/index.xml @@ -12,252 +12,252 @@ https://harvester.github.io/tests/manual/backup-and-restore/backup_s3_permission/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/backup_s3_permission/ - Ref: https://github.com/harvester/harvester/issues/1339 Verify Items Backup target connection to S3 should only require the permission to access the specific bucket Case: S3 Backup with single-bucket-user Install Harvester with any nodes Setup Minio then follow the instruction to create a single-bucket-user. Create specific bucket for the user Create other buckets setup backup-target with the single-bucket-user permission When assigning the dedicated bucket (for the user), the connection should succeed. When assigning other buckets, the connection should fail with an AccessDenied error message + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1339">https://github.com/harvester/harvester/issues/1339</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Backup target connection to S3 should only require the permission to access the specific bucket</li> </ul> <h2 id="case-s3-backup-with-single-bucket-user">Case: S3 Backup with <code>single-bucket-user</code></h2> <ol> <li>Install Harvester with any nodes</li> <li>Setup Minio <ol> <li>then follow the <a href="https://objectivefs.com/howto/how-to-restrict-s3-bucket-policy-to-only-one-aws-s3-bucket">instruction</a> to create a <code>single-bucket-user</code>.</li> <li>Create specific bucket for the user</li> <li>Create other buckets</li> </ol> </li> <li>setup <code>backup-target</code> with the <strong>single-bucket-user</strong> permission <ol> <li>When assigning the dedicated bucket (for the user), the connection should succeed.</li> <li>When assigning other buckets, the connection should fail with an <strong>AccessDenied</strong> error message</li> </ol> </li> </ol> Backup Single VM (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/backup-single-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/backup-single-vm/ - Click take backup in virtual machine list Expected Results Backup should be created Backup should be listed in backups list Backup should be available on remote storage (S3/NFS) + <ol> <li>Click take backup in virtual machine list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backup should be created</li> <li>Backup should be listed in backups list</li> <li>Backup should be available on remote storage (S3/NFS)</li> </ol> Backup Single VM that has been live migrated before (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/backup-single-vm-that-has-been-live-migrated/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/backup-single-vm-that-has-been-live-migrated/ - Click take backup in virtual machine list Expected Results Backup should be created Backup should be listed in backups list Backup should be available on remote storage (S3/NFS) + <ol> <li>Click take backup in virtual machine list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol>
<li>Backup should be created</li> <li>Backup should be listed in backups list</li> <li>Backup should be available on remote storage (S3/NFS)</li> </ol> Backup single VM with node off https://harvester.github.io/tests/manual/backup-and-restore/backup-single-vm-node-off/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/backup-single-vm-node-off/ - On multi-node setup bring down node that is hosting VM Click take backup in virtual machine list Expected Results The backup should complete successfully Comments We do allow taking backup even if the VM is down, as you can take backup when the VM is off; this is because the volume still exists with longhorn&rsquo;s multi replicas, but we need to check the data integrity. Known Bugs https://github.com/harvester/harvester/issues/1483 + <ol> <li>On multi-node setup bring down node that is hosting VM</li> <li>Click take backup in virtual machine list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The backup should complete successfully</li> </ol> <h2 id="comments">Comments</h2> <p>We do allow taking backup even if the VM is down, as you can take backup when the VM is off; this is because the volume still exists with longhorn&rsquo;s multi replicas, but we need to check the data integrity.</p> Backup Target error message https://harvester.github.io/tests/manual/backup-and-restore/backup_target_errmsg/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/backup_target_errmsg/ - Ref: https://github.com/harvester/harvester/issues/1051 Verify Items Backup target should check input before clicking Save Error message should be displayed on the edit page when the input is wrong Case: Connect to invalid Backup Target Install Harvester with any node Login to dashboard, then navigate to Advanced Settings Edit backup-target, then input invalid data for NFS/S3 and click Save The page should not be redirected to Advanced Settings Error message should be displayed under the Save button + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1051">https://github.com/harvester/harvester/issues/1051</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Backup target should check input before clicking <strong>Save</strong></li> <li>Error message should be displayed on the edit page when the input is wrong</li> </ul> <h2 id="case-connect-to-invalid-backup-target">Case: Connect to invalid Backup Target</h2> <ol> <li>Install Harvester with any node</li> <li>Login to dashboard, then navigate to <strong>Advanced Settings</strong></li> <li>Edit <strong>backup-target</strong>, then input invalid data for NFS/S3 and click <strong>Save</strong></li> <li>The page should not be redirected to <strong>Advanced Settings</strong></li> <li>Error message should be displayed under the <strong>Save</strong> button</li> </ol> Create Backup Target (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/create-backup-target/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/create-backup-target/ - Open up Backup-target in settings Input server info Save Expected Results Backup Target should show in settings + <ol> <li>Open up Backup-target in settings</li> <li>Input server info</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backup Target should show in settings</li> </ol> Delete backup from backups list (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/delete-single-backup/ Mon, 01 Jan 0001 00:00:00 +0000
https://harvester.github.io/tests/manual/backup-and-restore/delete-single-backup/ - Delete backup from backups list Expected Results Backup should be removed from list Backup should be removed from remote storage + <ol> <li>Delete backup from backups list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backup should be removed from list</li> <li>Backup should be removed from remote storage</li> </ol> Delete first backup in chained backup (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/delete-first-backup-chained-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/delete-first-backup-chained-backup/ - Create a new VM Create a file named 1 and add text Create a backup Edit text in file 1 create file 2 Create Backup Edit file 2 text Create file 3 and add text Create backup Delete backup 1 Validate file 2 and 3 are the same as they were Restore to backup 2 Validate that md5sum -c file1-2.md5 file2.md5 file3.md5 file 1 is in second format file 2 is in first format file 3 doesn&rsquo;t exist Expected Results Vm should create All file operations should create Backup should run All file operations should create Backup should run All file operations should create files should be as expected + <ol> <li>Create a new VM</li> <li>Create a file named 1 and add text</li> <li>Create a backup</li> <li>Edit text in file 1</li> <li>create file 2</li> <li>Create Backup</li> <li>Edit file 2 text</li> <li>Create file 3 and add text</li> <li>Create backup</li> <li>Delete backup 1</li> <li>Validate file 2 and 3 are the same as they were</li> <li>Restore to backup 2</li> <li>Validate that <ul> <li><code>md5sum -c file1-2.md5 file2.md5 file3.md5</code></li> <li>file 1 is in second format</li> <li>file 2 is in first format</li> <li>file 3 doesn&rsquo;t exist</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Vm should create</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>files should be as expected</li> </ol> Delete last backup in chained backup (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/delete-last-backup-chained-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/delete-last-backup-chained-backup/ - Create a new VM Create a file named 1 and add some data using command dd if=/dev/urandom of=file1.txt count=100 bs=1M Compute md5sum : md5sum-1 Create a backup Overwrite file 1 Create file 2 Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3 Create Backup Overwrite the file 2 Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5 Create backup delete backup 3 Validate that files didn&rsquo;t change Restore to backup 2 Validate that md5sum -c file1-2. 
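<p>The chained-backup cases above rely on <code>dd</code> and <code>md5sum</code>; a minimal sketch of the data-preparation step (file names follow the steps above) is:</p> <pre tabindex="0"><code>dd if=/dev/urandom of=file1.txt count=100 bs=1M   # write ~100MiB of random data
md5sum file1.txt &gt; file1.md5                      # record the checksum before taking the backup
</code></pre>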
+ <ol> <li>Create a new VM</li> <li>Create a file named 1 and add some data using command <code>dd if=/dev/urandom of=file1.txt count=100 bs=1M</code></li> <li>Compute md5sum : md5sum-1</li> <li>Create a backup</li> <li>Overwrite file 1</li> <li>Create file 2</li> <li>Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3</li> <li>Create Backup</li> <li>Overwrite the file 2</li> <li>Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5</li> <li>Create backup</li> <li>delete backup 3</li> <li>Validate that files didn&rsquo;t change</li> <li>Restore to backup 2</li> <li>Validate that <ul> <li><code>md5sum -c file1-2.md5 file2.md5 file3.md5 </code></li> <li>file 1 is in second format</li> <li>file 2 is in original format</li> <li>file 3 doesn&rsquo;t exist</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Vm should create</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>files should be as expected</li> </ol> Delete middle backup in chained backup (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/delete-middle-backup-chained-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/delete-middle-backup-chained-backup/ - Create a new VM Create a file named 1 and add some data using command dd if=/dev/urandom of=file1.txt count=100 bs=1M Compute md5sum : md5sum-1 Create a backup Overwrite file 1 Create file 2 Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3 Create Backup Overwrite the file 2 Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5 Create backup Delete backup 2 Validate file 2 and 3 are the same as they were Restore to backup 1 Validate that md5sum -c file1. 
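<p>After restoring one of the chained backups, the validation described above can be scripted roughly as follows (a sketch; the file names match the steps above, and files created after the restored backup point are expected to be missing):</p> <pre tabindex="0"><code>md5sum -c file1.md5 file2.md5 file3.md5   # checksums of files that still exist must match the recorded values
ls file2.txt file3.txt                    # ls reports an error for files that should no longer exist
</code></pre>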
+ <ol> <li>Create a new VM</li> <li>Create a file named 1 and add some data using command <code>dd if=/dev/urandom of=file1.txt count=100 bs=1M</code></li> <li>Compute md5sum : md5sum-1</li> <li>Create a backup</li> <li>Overwrite file 1</li> <li>Create file 2</li> <li>Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3</li> <li>Create Backup</li> <li>Overwrite the file 2</li> <li>Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5</li> <li>Create backup</li> <li>Delete backup 2</li> <li>Validate file 2 and 3 are the same as they were</li> <li>Restore to backup 1</li> <li>Validate that <ul> <li><code>md5sum -c file1.md5 file2.md5 file3.md5 </code></li> <li>file 1 is in original format - md5sum-1</li> <li>file 2 doesn&rsquo;t exist</li> <li>file 3 doesn&rsquo;t exist</li> </ul> </li> <li>Validate data by restoring other backups also.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Vm should create</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>files should be as expected</li> </ol> Delete multiple backups https://harvester.github.io/tests/manual/backup-and-restore/delete-multiple-backups/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/delete-multiple-backups/ - Select multiple Backups from Backups list Click Delete Expected Results Backups should be removed from list Backups should be removed from remote storage + <ol> <li>Select multiple Backups from Backups list</li> <li>Click Delete</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backups should be removed from list</li> <li>Backups should be removed from remote storage</li> </ol> Edit backup read YAML from file https://harvester.github.io/tests/manual/backup-and-restore/edit-backup-read-yaml-from-file/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/edit-backup-read-yaml-from-file/ - Edit YAML for backup Read from File Show Diff Save Expected Results Diff should show changes Backup should be updated + <ol> <li>Edit YAML for backup</li> <li>Read from File</li> <li>Show Diff</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Diff should show changes</li> <li>Backup should be updated</li> </ol> Edit backup via YAML (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/edit-backup-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/edit-backup-yaml/ - Edit YAML for backup Show Diff Save Expected Results Diff should show changes Backup should be updated + <ol> <li>Edit YAML for backup</li> <li>Show Diff</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Diff should show changes</li> <li>Backup should be updated</li> </ol> Filter backups https://harvester.github.io/tests/manual/backup-and-restore/filter-backups/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/filter-backups/ - Enter in string in filter input field Columns available for matching: State &ldquo;Ready&rdquo; &ldquo;Progressing&rdquo; Name Target VM With string With matching string Input Clear With non-matching string Input Clear Clear String Expected Results List should filter based on string List should re-populate after clearing string + <ol> <li>Enter in string in filter input field <ul> <li>Columns available for 
matching: <ul> <li>State <ul> <li>&ldquo;Ready&rdquo;</li> <li>&ldquo;Progressing&rdquo;</li> </ul> </li> <li>Name</li> <li>Target VM</li> </ul> </li> <li>With string <ul> <li>With matching string <ul> <li>Input</li> <li>Clear</li> </ul> </li> <li>With non-matching string <ul> <li>Input Clear</li> <li>Clear String</li> </ul> </li> </ul> </li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>List should filter based on string</li> <li>List should re-populate after clearing string</li> </ol> Negative create backup on store that is full (NFS) https://harvester.github.io/tests/manual/backup-and-restore/negative-backup-full-backup-target/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-backup-full-backup-target/ - Initiate a backup with existing VM where the NFS store is full Expected Results You should get an error + <ol> <li>Initiate a backup with existing VM where the NFS store is full</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error</li> </ol> Negative Create Backup Target https://harvester.github.io/tests/manual/backup-and-restore/negative-create-backup-target/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-create-backup-target/ - Open up Backup-target in settings Input Incorrect server info Save Expected Results You should get an error on saving + <ol> <li>Open up Backup-target in settings</li> <li>Input Incorrect server info</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error on saving</li> </ol> Negative delete backup while restore is in progress https://harvester.github.io/tests/manual/backup-and-restore/negative-delete-backup-while-restoring/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-delete-backup-while-restoring/ - Create a backup of VM which has data more than 10Gi. Add 2Gi data in the same VM. Initiate deletion of the backup. While deletion is in progress, create another backup Expected Results Creation of backup should be prevented as there is a deletion is in progress. 
Once the deletion is completed, the backup creation should take place + <ol> <li>Create a backup of VM which has data more than 10Gi.</li> <li>Add 2Gi data in the same VM.</li> <li>Initiate deletion of the backup.</li> <li>While deletion is in progress, create another backup</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Creation of backup should be prevented as there is a deletion is in progress.</li> <li>Once the deletion is completed, the backup creation should take place</li> </ol> Negative delete multiple backups https://harvester.github.io/tests/manual/backup-and-restore/negative-delete-multiple-backups/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-delete-multiple-backups/ - Disconnect Backup Target Select multiple Backups from Backups list Click Delete Expected Results You should get an error + <ol> <li>Disconnect Backup Target</li> <li>Select multiple Backups from Backups list</li> <li>Click Delete</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error</li> </ol> Negative delete single backup https://harvester.github.io/tests/manual/backup-and-restore/negative-delete-single-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-delete-single-backup/ - Take down backup target either by account, or via network blocking Delete backup from backups list Expected Results You should get an error + <ol> <li>Take down backup target either by account, or via network blocking</li> <li>Delete backup from backups list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error</li> </ol> Negative disrupt backup server while restore is in progress https://harvester.github.io/tests/manual/backup-and-restore/negative-disrupt-backup-target-while-restoring/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-disrupt-backup-target-while-restoring/ - Initiate a backup restore from NFS server. Disconnect network from NFS server for 5 secs Verify the restore status Expected Results The restore is not be interrupted and should complete. 
Data should be intact + <ol> <li>Initiate a backup restore from NFS server.</li> <li>Disconnect network from NFS server for 5 secs</li> <li>Verify the restore status</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The restore is not be interrupted and should complete.</li> <li>Data should be intact</li> </ol> Negative edit backup read from file YAML https://harvester.github.io/tests/manual/backup-and-restore/negative-edit-backup-file/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-edit-backup-file/ - Disconnect backup target Edit YAML for backup Read from File Show Diff Save Expected Results You should get an error on saving + <ol> <li>Disconnect backup target</li> <li>Edit YAML for backup</li> <li>Read from File</li> <li>Show Diff</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error on saving</li> </ol> Negative edit backup YAML https://harvester.github.io/tests/manual/backup-and-restore/negative-edit-backup-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-edit-backup-yaml/ - Disconnect backup target Edit YAML for backup Show Diff Save Expected Results You should get an error on saving + <ol> <li>Disconnect backup target</li> <li>Edit YAML for backup</li> <li>Show Diff</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error on saving</li> </ol> Negative initiate a backup while system is taking another backup https://harvester.github.io/tests/manual/backup-and-restore/negative-backup-while-taking-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-backup-while-taking-backup/ - Start a VM backup, bk-1 of a VM which has data d1 While the backup is in progress, write some more data d2 in the VM disk and initiate another backup bk-2. Verify the backup 1 and backup 2 Expected Results Backup bk-1 should have only d1 data backup bk-2 should have data d1 and d2 + <ol> <li>Start a VM backup, bk-1 of a VM which has data d1</li> <li>While the backup is in progress, write some more data d2 in the VM disk and initiate another backup bk-2.</li> <li>Verify the backup 1 and backup 2</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backup bk-1 should have only d1 data</li> <li>backup bk-2 should have data d1 and d2</li> </ol> Negative Power down the node where the VM is getting replaced by the restore https://harvester.github.io/tests/manual/backup-and-restore/negative-power-down-node-while-restoring-replace/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-power-down-node-while-restoring-replace/ - Related issues tests#1263 [ReleaseTesting] Negative Power down the node where the VM is getting replaced by the restore Verification Steps Setup a 3 nodes harvester Create a VM w/ extra disk and some data Backup and shutdown VM Start to observe pod/virt-launcher-VMNAME to get the node VM restoring on for next step. Initiate a restore with existing VM, get node info from pod/virt-launcher-VMNAME. 
While the restore is in progress and VM is starting on a node, shut down the node + <ul> <li>Related issues <ul> <li><a href="https://github.com/harvester/tests/issues/1263">tests#1263</a> [ReleaseTesting] Negative Power down the node where the VM is getting replaced by the restore</li> </ul> </li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Setup a 3 nodes harvester</p> </li> <li> <p>Create a VM w/ extra disk and some data</p> </li> <li> <p>Backup and shutdown VM</p> </li> <li> <p>Start to observe <code>pod/virt-launcher-VMNAME</code> to get the node VM restoring on for next step.</p> Negative power down the node where the VM is getting restored https://harvester.github.io/tests/manual/backup-and-restore/negative-power-down-node-while-restoring/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-power-down-node-while-restoring/ - Initiate a restore. While the restore is in progress and VM is starting on a node, shut down the node Expected Results The restore should fail + <ol> <li>Initiate a restore.</li> <li>While the restore is in progress and VM is starting on a node, shut down the node</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The restore should fail</li> </ol> Negative restore backup replace existing VM https://harvester.github.io/tests/manual/backup-and-restore/negative-restore-backup-replace/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-restore-backup-replace/ - On multi-node setup bring down node that is hosting VM Navigate to backups list Click restore Backup Select appropriate option Select backup Click restore Expected Results You should get an error on restoring + <ol> <li>On multi-node setup bring down node that is hosting VM</li> <li>Navigate to backups list</li> <li>Click restore Backup</li> <li>Select appropriate option</li> <li>Select backup</li> <li>Click restore</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error on restoring</li> </ol> Negative restore backup replace existing VM with backup from same VM that is turned on https://harvester.github.io/tests/manual/backup-and-restore/negative-restore-backup-replace-while-deleting-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-restore-backup-replace-while-deleting-backup/ - Make sure VM is turned on Navigate to backups list Click restore Backup Select appropriate option Select backup Click restore Delete backup while restoring Expected Results You should get an error + <ol> <li>Make sure VM is turned on</li> <li>Navigate to backups list</li> <li>Click restore Backup</li> <li>Select appropriate option</li> <li>Select backup</li> <li>Click restore</li> <li>Delete backup while restoring</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error</li> </ol> Negative restore backup replace existing VM with backup from same VM that is turned on (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/negative-restore-backup-replace-while-turned-on/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/negative-restore-backup-replace-while-turned-on/ - Make sure VM is turned on Navigate to backups list Click restore Backup Select appropriate option Select backup Click restore Expected Results You get an error that you have to stop VM before restoring backup + <ol> <li>Make sure VM is turned 
on</li> <li>Navigate to backups list</li> <li>Click restore Backup</li> <li>Select appropriate option</li> <li>Select backup</li> <li>Click restore</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You get an error that you have to stop VM before restoring backup</li> </ol> Restore backup create new vm (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-create-new-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-create-new-vm/ - Create a new file before restoring the backup and add some data Stop the VM where the backup was taken Navigate to backups list Click restore Backup Select appropriate option Select backup Click restore Validate that new file is no longer present on machine Expected Results Backup should restore VM should update to previous backup File should no longer be present + <ol> <li>Create a new file before restoring the backup and add some data</li> <li>Stop the VM where the backup was taken</li> <li>Navigate to backups list</li> <li>Click restore Backup</li> <li>Select appropriate option</li> <li>Select backup</li> <li>Click restore</li> <li>Validate that new file is no longer present on machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backup should restore</li> <li>VM should update to previous backup</li> <li>File should no longer be present</li> </ol> Restore backup create new vm in another namespace https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-create-new-vm-in-another-namespace/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-create-new-vm-in-another-namespace/ - Create a VM vm in namespace default. Create a file ~/test.txt with content test. Create a VMBackup default-vm-backup for it. Create a new namepsace new-ns. Create a VMRestore restore-default-vm-backup-to-new-ns in new-ns namespace based on the VMBackup default-vm-backup to create a new VM. Expected Results A new VM in new-ns namespace should be created. It should have the file ~/test.txt with content test. 
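<p>A minimal verification sketch for the cross-namespace restore case here, assuming kubectl access to the Harvester cluster and that the restored guest can be reached over its console or ssh; the resource group shown is the standard KubeVirt one, and namespace/file names follow the steps listed in this case.</p> <pre tabindex="0"><code># list VMs in the target namespace after the restore completes
kubectl get virtualmachines.kubevirt.io -n new-ns

# inside the restored guest, confirm the file survived the restore
cat ~/test.txt   # expected output: test
</code></pre>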
+ <ol> <li>Create a VM <code>vm</code> in namespace <code>default</code>.</li> <li>Create a file <code>~/test.txt</code> with content <code>test</code>.</li> <li>Create a VMBackup <code>default-vm-backup</code> for it.</li> <li>Create a new namepsace <code>new-ns</code>.</li> <li>Create a VMRestore <code>restore-default-vm-backup-to-new-ns</code> in <code>new-ns</code> namespace based on the VMBackup <code>default-vm-backup</code> to create a new VM.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>A new VM in <code>new-ns</code> namespace should be created.</li> <li>It should have the file <code>~/test.txt</code> with content <code>test</code>.</li> </ol> Restore Backup for VM that was live migrated (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-for-vm-live-migrated/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-for-vm-live-migrated/ - Navigate to backups list Click restore Backup Select appropriate option Select backup Click restore Validate that new file is no longer present on machine Expected Results Backup should restore VM should update to previous backup File should no longer be present + <ol> <li>Navigate to backups list</li> <li>Click restore Backup</li> <li>Select appropriate option</li> <li>Select backup</li> <li>Click restore</li> <li>Validate that new file is no longer present on machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backup should restore</li> <li>VM should update to previous backup</li> <li>File should no longer be present</li> </ol> Restore backup replace existing VM with backup from same VM (e2e_be) https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-replace-existing/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/restore-backup-replace-existing/ - Create a new file before restoring the backup and add some data Stop the VM Navigate to backups list Click restore Backup Select appropriate option Select backup Click restore Validate that new file is no longer present on machine Expected Results Backup should restore VM should update to previous backup File should no longer be present + <ol> <li>Create a new file before restoring the backup and add some data</li> <li>Stop the VM</li> <li>Navigate to backups list</li> <li>Click restore Backup</li> <li>Select appropriate option</li> <li>Select backup</li> <li>Click restore</li> <li>Validate that new file is no longer present on machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Backup should restore</li> <li>VM should update to previous backup</li> <li>File should no longer be present</li> </ol> Restore First backup in chained backup https://harvester.github.io/tests/manual/backup-and-restore/restore-first-backup-chained-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/restore-first-backup-chained-backup/ - Create a new VM Create a file named 1 and add some data using command dd if=/dev/urandom of=file1.txt count=100 bs=1M Compute md5sum : md5sum-1 Create a backup Overwrite file 1 Create file 2 Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3 Create Backup Overwrite the file 2 Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5 Create backup Validate that files didn&rsquo;t change Restore to backup 1 Validate that md5sum -c file1. 
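<p>The chained-backup cases here and below all rely on the same in-guest file and checksum bookkeeping; a minimal sketch of those steps on a generic Linux guest (the stage file names are illustrative, the dd/md5sum commands follow the steps in these cases).</p> <pre tabindex="0"><code># stage 1: create file 1 and record md5sum-1, then take backup 1
dd if=/dev/urandom of=file1.txt count=100 bs=1M
md5sum file1.txt | tee stage1.md5

# stage 2: overwrite file 1, create file 2, record md5sum-2/md5sum-3, then take backup 2
dd if=/dev/urandom of=file1.txt count=100 bs=1M
dd if=/dev/urandom of=file2.txt count=100 bs=1M
md5sum file1.txt file2.txt | tee stage2.md5

# stage 3: overwrite file 2, create file 3, record md5sum-4/md5sum-5, then take backup 3
dd if=/dev/urandom of=file2.txt count=100 bs=1M
dd if=/dev/urandom of=file3.txt count=100 bs=1M
md5sum file2.txt file3.txt | tee stage3.md5

# after restoring a given backup, compare against the matching record
md5sum -c stage1.md5    # e.g. after restoring backup 1
</code></pre>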
+ <ol> <li>Create a new VM</li> <li>Create a file named 1 and add some data using command <code>dd if=/dev/urandom of=file1.txt count=100 bs=1M</code></li> <li>Compute md5sum : md5sum-1</li> <li>Create a backup</li> <li>Overwrite file 1</li> <li>Create file 2</li> <li>Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3</li> <li>Create Backup</li> <li>Overwrite the file 2</li> <li>Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5</li> <li>Create backup</li> <li>Validate that files didn&rsquo;t change</li> <li>Restore to backup 1</li> <li>Validate that <ul> <li><code>md5sum -c file1.md5 file2.md5 file3.md5</code></li> <li>file 1 is in original format - md5sum-1</li> <li>file 2 doesn&rsquo;t exist</li> <li>file 3 doesn&rsquo;t exist</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Vm should create</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>files should be as expected</li> </ol> Restore last backup in chained backup https://harvester.github.io/tests/manual/backup-and-restore/restore-last-backup-chained-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/restore-last-backup-chained-backup/ - Create a new VM Create a file named 1 and add some data using command dd if=/dev/urandom of=file1.txt count=100 bs=1M Compute md5sum : md5sum-1 Create a backup Overwrite file 1 Create file 2 Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3 Create Backup Overwrite the file 2 Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5 Create backup Validate that files didn&rsquo;t change Restore to backup 3 Validate that md5sum -c file1-2. 
+ <ol> <li>Create a new VM</li> <li>Create a file named 1 and add some data using command <code>dd if=/dev/urandom of=file1.txt count=100 bs=1M</code></li> <li>Compute md5sum : md5sum-1</li> <li>Create a backup</li> <li>Overwrite file 1</li> <li>Create file 2</li> <li>Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3</li> <li>Create Backup</li> <li>Overwrite the file 2</li> <li>Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5</li> <li>Create backup</li> <li>Validate that files didn&rsquo;t change</li> <li>Restore to backup 3</li> <li>Validate that <ul> <li><code>md5sum -c file1-2.md5 file2-2.md5 file3.md5 </code></li> <li>file 1 is in second format</li> <li>file 2 is in second format</li> <li>file 3 matches</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Vm should create</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>files should be as expected</li> </ol> Restore middle backup in chained backup https://harvester.github.io/tests/manual/backup-and-restore/restore-middle-backup-chained-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/restore-middle-backup-chained-backup/ - Create a new VM Create a file named 1 and add some data using command dd if=/dev/urandom of=file1.txt count=100 bs=1M Compute md5sum : md5sum-1 Create a backup Overwrite file 1 Create file 2 Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3 Create Backup Overwrite the file 2 Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5 Create backup Validate that files didn&rsquo;t change Restore to backup 2 Validate that md5sum -c file1-2. 
+ <ol> <li>Create a new VM</li> <li>Create a file named 1 and add some data using command <code>dd if=/dev/urandom of=file1.txt count=100 bs=1M</code></li> <li>Compute md5sum : md5sum-1</li> <li>Create a backup</li> <li>Overwrite file 1</li> <li>Create file 2</li> <li>Compute md5sum for file 1 and file 2 : md5sum-2, md5sum-3</li> <li>Create Backup</li> <li>Overwrite the file 2</li> <li>Create file 3 and compute md5sum for file 2 and file 3 : md5sum-4, md5sum-5</li> <li>Create backup</li> <li>Validate that files didn&rsquo;t change</li> <li>Restore to backup 2</li> <li>Validate that <ul> <li><code>md5sum -c file1-2.md5 file2.md5 file3.md5</code></li> <li>file 1 is in second format</li> <li>file 2 is in original format</li> <li>file 3 doesn&rsquo;t exist</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Vm should create</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>Backup should run</li> <li>All file operations should create</li> <li>files should be as expected</li> </ol> VM Backup with metadata https://harvester.github.io/tests/manual/backup-and-restore/vm_backup_metadata/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/backup-and-restore/vm_backup_metadata/ - Ref: https://github.com/harvester/harvester/issues/988 Verify Items Metadata should be removed along with VM deleted Metadata should be synced after backup target switched Metadata can be used in new cluster Case: Metadata create and delete Install Harvester with any nodes Create an image for VM creation Setup NFS/S3 backup target Create a VM, then create a backup named backup1 File default-backup1.cfg should be exist in the backup target path &lt;backup root&gt;/harvester/vmbackups Delete the VM Backup backup1 File default-backup1. 
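<p>For the metadata case here, the per-backup metadata file can be checked directly on the backup target; a minimal sketch assuming an NFS backup target mounted from a Linux host (the NFS server and export path are illustrative, the vmbackups path and file name follow this case).</p> <pre tabindex="0"><code># mount the NFS export used as the backup target (illustrative server/export)
sudo mount -t nfs nfs-server:/exports/harvester-backups /mnt/backup-target

# the metadata file for a backup named backup1 taken in the default namespace
ls -l /mnt/backup-target/harvester/vmbackups/default-backup1.cfg

# after deleting the backup, the file should be gone
ls /mnt/backup-target/harvester/vmbackups/
</code></pre>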
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/988">https://github.com/harvester/harvester/issues/988</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Metadata should be removed along with VM deleted</li> <li>Metadata should be synced after backup target switched</li> <li>Metadata can be used in new cluster</li> </ul> <h2 id="case-metadata-create-and-delete">Case: Metadata create and delete</h2> <ol> <li>Install Harvester with any nodes</li> <li>Create an image for VM creation</li> <li>Setup NFS/S3 <strong>backup target</strong></li> <li>Create a VM, then create a backup named <code>backup1</code></li> <li>File <code>default-backup1.cfg</code> should be exist in the <strong>backup target</strong> path <code>&lt;backup root&gt;/harvester/vmbackups</code></li> <li>Delete the VM Backup <code>backup1</code></li> <li>File <code>default-backup1.cfg</code> should be removed</li> </ol> <h2 id="case-metadata-sync-after-backup-target-changed">Case: Metadata sync after backup target changed</h2> <ol> <li>Install Harvester with any nodes</li> <li>Create an image for VM creation</li> <li>Setup NFS <strong>backup target</strong></li> <li>Create VM <code>vm1</code>, then create file <code>tmp</code> with content <code>first</code> in the VM</li> <li>Backup <code>vm1</code> named <code>backup1</code></li> <li>Append content <code>second</code> into <code>tmp</code> file in the VM <code>vm1</code></li> <li>Backup <code>vm1</code> named <code>backup2</code></li> <li>Switch <strong>backup target</strong> to S3</li> <li>Delete backups and VM <code>vm1</code> in the dashboard</li> <li>Backup Files should be kept in the former <strong>backup target</strong></li> <li>Swithc <strong>backup target</strong> back</li> <li>Backups should be loaded in Dashboard&rsquo;s Backup page</li> <li>Restore <code>backup1</code> to <code>vm-b1</code></li> <li><code>vm-b1</code> should contain file which was created in <strong>Step 4</strong></li> <li>Restore <code>backup2</code> to <code>vm-b2</code></li> <li><code>vm-b2</code> should contain file which was modified in <strong>step 6</strong></li> <li>Repeat <strong>Step 3</strong> to <strong>Step 16</strong> with following Backup ordering</li> </ol> <ul> <li>S3 -&gt; NFS</li> <li>NFS -&gt; NFS</li> <li>S3 -&gt; S3</li> </ul> <h2 id="case-backup-rebuild-in-new-cluster">Case: Backup rebuild in new cluster</h2> <ol> <li>Repeat <strong>Case: Metadata create and delete</strong> as cluster A to generate backup data</li> <li>Installer another Harvester with any nodes as cluster B</li> <li>setup <strong>backup-target</strong> which contained old backup data</li> <li><strong>Backup Targets</strong> in <em>Backups</em> should show <code>Ready</code> state for all backups. (this will take few mins depends on network connection)</li> <li>Create image for backup <ol> <li>The image <strong>MUST</strong> use the same <code>storageClassName</code> name as the backup created.</li> <li><code>storageClassName</code> can be found in backup&rsquo;s <code>volumeBackups</code> in the YAML definition.</li> <li><code>storageClassName</code> can be assigned by <code>metadata.name</code> when creating image via YAML. 
For example, when you assign <code>metadata.name</code> as <code>image-dgf27</code>, the <code>storageClassName</code> will be named as <code>longhorn-image-dgf27</code></li> </ol> </li> <li>Restore backup to new VM</li> <li>VM should started successfully</li> <li>VM should contain those data that it was taken backup</li> </ol> diff --git a/manual/deployment/index.xml b/manual/deployment/index.xml index 5b4a2e967..cacaaaa56 100644 --- a/manual/deployment/index.xml +++ b/manual/deployment/index.xml @@ -12,196 +12,196 @@ https://harvester.github.io/tests/manual/deployment/1218-http-proxy-setting-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/1218-http-proxy-setting-harvester/ - Related issue: #1218 Missing http proxy settings on rke2 and rancher pod Environment setup Setup an airgapped harvester Clone ipxe example repository https://github.com/harvester/ipxe-examples Edit the setting.xml file under vagrant ipxe example Set offline: true Use ipxe vagrant example to setup a 3 nodes cluster Verification Steps Open Settings, edit http-proxy with the following values HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; Create image from URL (change folder date to latest) https://cloud-images.ubuntu.com/focal/20211122/focal-server-cloudimg-amd64.img Create a virtual machine Prepare an S3 account with Bucket, Bucket region, Access Key ID and Secret Access Key Setup backup target in settings Edit virtual machine and take backup ssh to server node with user rancher Run kubectl create deployment nginx --image=nginx:latest on Harvester cluster Run kubectl get pods Expected Results At Step 2, Can download and create image from URL without error At step 6, Can backup running VM to external S3 storage correctly At step 6, Can delete backup from external S3 correctly At step 9, Can pull image from internet and deploy nginx pod in running status harvester-node-0:/home/rancher # kubectl create deployment nginx --image=nginx:latest deployment. 
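<p>A minimal sketch of the pod-level check at the end of the http-proxy case above, run after ssh-ing to a server node as the rancher user; the rollout and label selectors are standard kubectl usage for a deployment created this way, not Harvester-specific commands.</p> <pre tabindex="0"><code># deploy a test workload that must pull its image through the configured proxy
kubectl create deployment nginx --image=nginx:latest

# wait for the rollout and confirm the pod reaches Running
kubectl rollout status deployment/nginx --timeout=120s
kubectl get pods -l app=nginx -o wide
</code></pre>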
+ <ul> <li>Related issue: <a href="https://github.com/harvester/harvester/issues/1218">#1218</a> Missing http proxy settings on rke2 and rancher pod</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup an airgapped harvester</p> <ol> <li>Clone ipxe example repository <a href="https://github.com/harvester/ipxe-examples">https://github.com/harvester/ipxe-examples</a></li> <li>Edit the <code>setting.xml</code> file under vagrant ipxe example</li> <li>Set offline: <code>true</code></li> <li>Use ipxe vagrant example to setup a 3 nodes cluster</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Open Settings, edit <code>http-proxy</code> with the following values</li> </ol> <pre tabindex="0"><code>HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; </code></pr Add a node to existing cluster (e2e_be) https://harvester.github.io/tests/manual/deployment/add-node-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/add-node-cluster/ - Start with harvester installer and select &lsquo;Join an existing Harvester cluster&rsquo; Provide the management ip and cluster token Expected Results On completion, Harvester should show the same management url as of existing node and status as ready. Check the host section, the joined node must appear + <ol> <li>Start with harvester installer and select &lsquo;Join an existing Harvester cluster&rsquo;</li> <li>Provide the management ip and cluster token</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>On completion, Harvester should show the same management url as of existing node and status as ready.</li> <li>Check the host section, the joined node must appear</li> </ol> Additional trusted CA configure-ability https://harvester.github.io/tests/manual/deployment/additional-trusted-ca/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/additional-trusted-ca/ - Ref: https://github.com/harvester/harvester/issues/1260 Verify Items Image download with self-signed additional-ca VM backup with self-signed additional-ca Case: Image downlaod Install Harvester with ipxe-example which includes https://github.com/harvester/ipxe-examples/pull/36 Upload any valid iso to pxe-server&rsquo;s /var/www/ Use Browser to access https://&lt;pxe-server-ip&gt;/&lt;iso-file&gt; should be valid Add self-signed cert to Harvester Navigate to Harvester Advanced Settings, edit additional-ca cert content can be retrieved in pxe-server /etc/ssl/certs/nginx-selfsigned.crt Create Image with the same URL https://&lt;pxe-server-ip&gt;/&lt;iso-file&gt; Image should be downloaded Case: VM backup Install Harvester with ipxe-example setup Minio in pxe-server follow instruction to download binary and start the service login to UI console then add region and create bucket follow instruction to generate self-signed cert with IP SANs restart service with self-signed cert Add self-signed cert to Harvester Add local Minio info as S3 into backup-target Backup-Target Should not pop up any Error Message Create Image for VM creation Create VM with any resource Perform VM backup VM&rsquo;s data Should be backup into Minio&rsquo;s folder + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1260">https://github.com/harvester/harvester/issues/1260</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Image download with self-signed 
additional-ca</li> <li>VM backup with self-signed additional-ca</li> </ul> <h3 id="case-image-downlaod">Case: Image downlaod</h3> <ol> <li>Install Harvester with ipxe-example which includes <a href="https://github.com/harvester/ipxe-examples/pull/36">https://github.com/harvester/ipxe-examples/pull/36</a></li> <li>Upload any valid iso to <strong>pxe-server</strong>&rsquo;s <code>/var/www/</code></li> <li>Use Browser to access <code>https://&lt;pxe-server-ip&gt;/&lt;iso-file&gt;</code> should be valid</li> <li>Add self-signed cert to Harvester <ul> <li>Navigate to Harvester <em>Advanced Settings</em>, edit <em>additional-ca</em></li> <li>cert content can be retrieved in pxe-server <code>/etc/ssl/certs/nginx-selfsigned.crt</code></li> </ul> </li> <li>Create Image with the same URL <code>https://&lt;pxe-server-ip&gt;/&lt;iso-file&gt;</code></li> <li>Image should be downloaded</li> </ol> <h3 id="case-vm-backup">Case: VM backup</h3> <ol> <li>Install Harvester with ipxe-example</li> <li>setup <strong>Minio</strong> in pxe-server <ul> <li>follow <a href="https://docs.min.io/docs/minio-quickstart-guide.html">instruction</a> to download binary and start the service</li> <li>login to UI console then add region and create bucket</li> <li>follow <a href="https://docs.min.io/docs/how-to-secure-access-to-minio-server-with-tls.html#using-open-ssl">instruction</a> to generate self-signed cert with IP SANs</li> <li>restart service with self-signed cert</li> </ul> </li> <li>Add self-signed cert to Harvester</li> <li>Add local <strong>Minio</strong> info as S3 into <strong>backup-target</strong></li> <li>Backup-Target Should not pop up any Error Message</li> <li>Create Image for VM creation</li> <li>Create VM with any resource</li> <li>Perform VM backup</li> <li>VM&rsquo;s data Should be backup into <strong>Minio</strong>&rsquo;s folder</li> </ol> Automatically get VIP during PXE installation https://harvester.github.io/tests/manual/deployment/1410-pxe-installation-automatically-get-vip/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/1410-pxe-installation-automatically-get-vip/ - Related issues: #1410 Support getting VIP automatically during PXE boot installation Verification Steps Comment vip and vip_hw_addr in ipxe-examples/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2 Start vagrant-pxe-harvester Run kubectl get cm -n harvester-system vip Check whether we can get ip and hwAddress in it Run ip a show harvester-mgmt Check whether there are two IPs in it and one is the vip. 
Expected Results VIP should automatically be assigned + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1410">#1410</a> Support getting VIP automatically during PXE boot installation</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Comment <code>vip</code> and <code>vip_hw_addr</code> in <code>ipxe-examples/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2</code></li> <li>Start vagrant-pxe-harvester</li> <li>Run <code>kubectl get cm -n harvester-system vip</code> <ul> <li>Check whether we can get <code>ip</code> and <code>hwAddress</code> in it</li> </ul> </li> <li>Run <code>ip a show harvester-mgmt</code> <ul> <li>Check whether there are two IPs in it and one is the vip.</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VIP should automatically be assigned</li> </ol> Change DNS servers while installing https://harvester.github.io/tests/manual/deployment/1590-change-dns-server-for-install/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/1590-change-dns-server-for-install/ - Related issues: #1590 Harvester installer can&rsquo;t resolve hostnames Known Issues When supplying multiple ip=&hellip; kernel cmdline arguments, only one of them will be configured by dracut, therefore only the configured interface would have ifcfg generated. So for now, we can&rsquo;t support multiple ip=&hellip; kernel cmdline arguments Verification Steps Because configuring the network of the installation environment only works with PXE installation, you could use ipxe-examples/vagrant-pxe-harvester/ to set it up. Be sure you can run setup_harvester. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1590">#1590</a> Harvester installer can&rsquo;t resolve hostnames</li> </ul> <h2 id="known-issues">Known Issues</h2> <p>When supplying multiple ip=&hellip; kernel cmdline arguments, only one of them will be configured by dracut, therefore only the configured interface would have ifcfg generated. So for now, we can&rsquo;t support multiple ip=&hellip; kernel cmdline arguments</p> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Because configuring the network of the installation environment only works with PXE installation, you could use ipxe-examples/vagrant-pxe-harvester/ to set it up. Be sure you can run setup_harvester.sh without any problem.</p> Change DNS settings on vagrant-pxe-harvester install https://harvester.github.io/tests/manual/deployment/ipxe-dns-change/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/ipxe-dns-change/ - Install using ipxe-examples Also change harvester_network_config.dns_servers in the settings.yml for the vagrant environment before deploy. This will change the DNS in the harvester OS config. If you also want to change the DNS for everything in the DHCP scope change harvester_network_config.dhcp_server.dns_server. Expected Results On completion of the installation, Harvester should provide the management url and show status. SSH into one of the nodes. If you use the default configuration you can use ssh rancher@192. + <ul> <li>Install using <a href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester">ipxe-examples</a></li> <li>Also change <code>harvester_network_config.dns_servers</code> in the <code>settings.yml</code> for the vagrant environment before deploy. 
This will change the DNS in the harvester OS config.</li> <li>If you also want to change the DNS for everything in the DHCP scope change <code>harvester_network_config.dhcp_server.dns_server</code>.</li> </ul> <h2 id="expected-results">Expected Results</h2> <ol> <li>On completion of the installation, Harvester should provide the management url and show status.</li> <li>SSH into one of the nodes. If you use the default configuration you can use <code>ssh rancher@192.168.0.30</code>.</li> <li>When you run <code>cat /etc/resolv.conf</code> the changed DNS records should show up</li> </ol> Http proxy setting on harvester https://harvester.github.io/tests/manual/deployment/http-proxy-setting-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/http-proxy-setting-harvester/ - Related issue: #1218 Missing http proxy settings on rke2 and rancher pod Related issue: #1012 Failed to create image when deployed in private network environment Category: Network Environment setup Setup an airgapped harvester Clone ipxe example repository https://github.com/harvester/ipxe-examples Edit the setting.xml file under vagrant ipxe example Set offline: true Use ipxe vagrant example to setup a 3 nodes cluster Verification Steps Open Settings, edit http-proxy with the following values HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127. + <ul> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1218">#1218</a> Missing http proxy settings on rke2 and rancher pod</p> </li> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1012">#1012</a> Failed to create image when deployed in private network environment</p> </li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup an airgapped harvester</p> <ol> <li>Clone ipxe example repository <a href="https://github.com/harvester/ipxe-examples">https://github.com/harvester/ipxe-examples</a></li> <li>Edit the <code>setting.xml</code> file under vagrant ipxe example</li> <li>Set offline: <code>true</code></li> <li>Use ipxe vagrant example to setup a 3 nodes cluster</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Open Settings, edit <code>http-proxy</code> with the following values</li> </ol> <pre tabindex="0"><code>HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; </code></pr Install 2 node Harvester with a Harvester token with multiple words https://harvester.github.io/tests/manual/deployment/812-multiple-word-harvester-token/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/812-multiple-word-harvester-token/ - Related issues: #812 ISO install accepts multiple words for &lsquo;cluster token&rsquo; value resulting in failure to join cluster Verification Steps Start Harvester install from ISO At the &lsquo;Cluster token&rsquo; prompt, enter, here are words Proceed to complete the installation Boot a secondary host from the installation ISO and select the option to join an existing cluster At the &lsquo;Cluster token&rsquo; prompt, enter, here are words Proceed to complete the installation Verify both hosts show in hosts list at VIP Expected Results Install should complete successfully Host should add with no errors Both hosts should show up + <ul> <li>Related issues: <a 
href="https://github.com/harvester/harvester/issues/812">#812</a> ISO install accepts multiple words for &lsquo;cluster token&rsquo; value resulting in failure to join cluster</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Start Harvester install from ISO</li> <li>At the &lsquo;Cluster token&rsquo; prompt, enter, <code>here are words</code></li> <li>Proceed to complete the installation</li> <li>Boot a secondary host from the installation ISO and select the option to join an existing cluster</li> <li>At the &lsquo;Cluster token&rsquo; prompt, enter, <code>here are words</code></li> <li>Proceed to complete the installation</li> <li>Verify both hosts show in hosts list at VIP</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Install should complete successfully</li> <li>Host should add with no errors</li> <li>Both hosts should show up</li> </ol> Install Harvester from USB disk https://harvester.github.io/tests/manual/deployment/install_via_usb/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/install_via_usb/ - Ref: https://github.com/harvester/harvester/issues/1200 Verify Items Harvester can be installed via USB stick Case: Install Harvester via USB disk Follow the instruction to create USB disk Harvester should able to be installed via the USB on UEFI-based bare metals + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1200">https://github.com/harvester/harvester/issues/1200</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Harvester can be installed via USB stick</li> </ul> <h2 id="case-install-harvester-via-usb-disk">Case: Install Harvester via USB disk</h2> <ol> <li>Follow <a href="https://docs.harvesterhci.io/v1.0/install/usb-install/">the instruction</a> to create USB disk</li> <li>Harvester should able to be installed via the USB on <strong>UEFI-based</strong> bare metals</li> </ol> Install Harvester on a bare Metal node using ISO image https://harvester.github.io/tests/manual/deployment/install-bare-metal-iso/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/install-bare-metal-iso/ - Install using ISO image Expected Results On completion of the installation, Harvester should provide the management url and show status. Harvester and Longhorn components should be up and running in the cluster. Verify the memory, cpu and storage size shown on the Harvester UI + <p><a href="https://docs.harvesterhci.io/v1.3/install/index/">Install using ISO image</a></p> <h2 id="expected-results">Expected Results</h2> <ol> <li>On completion of the installation, Harvester should provide the management url and show status.</li> <li>Harvester and Longhorn components should be up and running in the cluster.</li> <li>Verify the memory, cpu and storage size shown on the Harvester UI</li> </ol> Install Harvester on a bare Metal node using PXE boot (e2e_be) https://harvester.github.io/tests/manual/deployment/install-bare-metal-pxe/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/install-bare-metal-pxe/ - Install Harvester using PXE boot Expected Results On completion of the installation, Harvester should provide the management url and show status. Harvester and Longhorn components should be up and running in the cluster. 
Verify the memory, cpu and storage size shown on the Harvester UI + <p><a href="https://docs.harvesterhci.io/v1.3/install/pxe-boot-install">Install Harvester using PXE boot</a></p> <h2 id="expected-results">Expected Results</h2> <ol> <li>On completion of the installation, Harvester should provide the management url and show status.</li> <li>Harvester and Longhorn components should be up and running in the cluster.</li> <li>Verify the memory, cpu and storage size shown on the Harvester UI</li> </ol> Install Harvester on a virtual nested node using ISO image https://harvester.github.io/tests/manual/deployment/install-nested-virtualization/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/install-nested-virtualization/ - Install using ISO image Expected Results On completion of the installation, Harvester should provide the management url and show status. Harvester and Longhorn components should be up and running in the cluster. Verify the memory, cpu and storage size shown on the Harvester UI + <p><a href="https://docs.harvesterhci.io/v1.3/install/index/">Install using ISO image</a></p> <h2 id="expected-results">Expected Results</h2> <ol> <li>On completion of the installation, Harvester should provide the management url and show status.</li> <li>Harvester and Longhorn components should be up and running in the cluster.</li> <li>Verify the memory, cpu and storage size shown on the Harvester UI</li> </ol> Install Harvester on NVMe SSD https://harvester.github.io/tests/manual/deployment/install_on_nvme/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/install_on_nvme/ - Ref: https://github.com/harvester/harvester/issues/1627 Verify Items Harvester can detect NVMe SSD when installing Harvester can be installed on NVMe SSD Case: Install Harvester on NVMe disk Create block image as NVMe disk Run dd if=/dev/zero of=/var/lib/libvirt/images/nvme145.img bs=1M count=148480 Then Change file owner chown qemu:qemu /var/lib/libvirt/images/nvme145.img Create VM via virt-manager Select Manual install, set Generic OS, Memory:9216, CPUs:8, Uncheck enable storage&hellip; and check customize configuration before install Select Firmware to use UEFI x86_64 (use usr/share/qemu/ovmf-x86_64-code. 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/1627">https://github.com/harvester/harvester/issues/1627</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Harvester can detect NVMe SSD when installing</li> <li>Harvester can be installed on NVMe SSD</li> </ul> <h2 id="case-install-harvester-on-nvme-disk">Case: Install Harvester on NVMe disk</h2> <ol> <li>Create block image as NVMe disk <ul> <li>Run <code>dd if=/dev/zero of=/var/lib/libvirt/images/nvme145.img bs=1M count=148480</code></li> <li>Then Change file owner <code>chown qemu:qemu /var/lib/libvirt/images/nvme145.img</code></li> </ul> </li> <li>Create VM via <em>virt-manager</em> <ul> <li>Select <em>Manual install</em>, set <strong>Generic OS</strong>, <code>Memory:9216</code>, <code>CPUs:8</code>, Uncheck <em><strong>enable storage&hellip;</strong></em> and check <strong>customize configuration before install</strong></li> <li>Select <em>Firmware</em> to use <strong>UEFI x86_64</strong> (use <code>usr/share/qemu/ovmf-x86_64-code.bin</code> in SUSE Leap 15.3)</li> <li>Select <em>Chipset</em> to use <strong>i440FX</strong></li> <li>Click <strong>Add Hardware</strong> to add CD-ROM including Harvester iso</li> <li>Update <strong>Boot Options</strong> to <strong>Enable boot menu</strong> and enable the CD-ROM</li> <li>edit XML with update <code>&lt;domain type=&quot;kvm&quot;&gt;</code> to <code>&lt;domain type=&quot;kvm&quot; xmlns:qemu=&quot;http://libvirt.org/schemas/domain/qemu/1.0&quot;&gt;</code></li> <li>append NVMe xml node into <strong>domain</strong>, then Begin Installation</li> </ul> </li> </ol> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-xml" data-lang="xml"><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:commandline&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;-drive&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;file=/var/lib/libvirt/images/nvme.img,if=none,id=D22,format=raw&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;-device&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;nvme,drive=D22,serial=1234&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;/qemu:commandline&gt;</span> </span></span></code></pr Install Option `HwAddr` for Network Interface https://harvester.github.io/tests/manual/deployment/hwaddr_configre_option/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/hwaddr_configre_option/ - Ref: https://github.com/harvester/harvester/issues/1064 Verify Items Configure Option HwAddr is working on install configuration Case: Use HwAddr to install harvester via PXE Install Harvester with PXE installation, set hwAddr instead of name in install.networks Harvester should installed successfully + 
<p>Ref: <a href="https://github.com/harvester/harvester/issues/1064">https://github.com/harvester/harvester/issues/1064</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Configure Option <code>HwAddr</code> is working on install configuration</li> </ul> <h3 id="case-use-hwaddr-to-install-harvester-via-pxe">Case: Use <code>HwAddr</code> to install harvester via PXE</h3> <ol> <li>Install Harvester with PXE installation, set <code>hwAddr</code> instead of <code>name</code> in <strong>install.networks</strong></li> <li>Harvester should installed successfully</li> </ol> Install Option `install.device` support symbolic link https://harvester.github.io/tests/manual/deployment/install_symblic_link/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/install_symblic_link/ - Ref: https://github.com/harvester/harvester/issues/1462 Verify Items Disk&rsquo;s symbolic link can be used in install configure option install.device Case: Harvester install with configure symbolic link on install.device Install Harvester with any nodes login to console, use ls -l /dev/disk/by-path to get disk&rsquo;s link name Re-install Harvester with configure file, with set the disk&rsquo;s link name instead. Harvester should be install successfully + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1462">https://github.com/harvester/harvester/issues/1462</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Disk&rsquo;s symbolic link can be used in install configure option <code>install.device</code></li> </ul> <h2 id="case-harvester-install-with-configure-symbolic-link-on-installdevice">Case: Harvester install with configure symbolic link on <code>install.device</code></h2> <ol> <li>Install Harvester with any nodes</li> <li>login to console, use <code>ls -l /dev/disk/by-path</code> to get disk&rsquo;s link name</li> <li>Re-install Harvester with configure file, with set the disk&rsquo;s link name instead.</li> <li>Harvester should be install successfully</li> </ol> Manual upgrade from 0.3.0 to 1.0.0 https://harvester.github.io/tests/manual/deployment/manual-upgrade-from-0.3.0-to-1.0.0/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/manual-upgrade-from-0.3.0-to-1.0.0/ - Related issues: #1644 Harvester pod crashes after upgrading from v0.3.0 to v1.0.0-rc1 (contain vm backup before upgrade) Related issues: #1588 VM backup cause harvester pod to crash Notice We recommend using zero downtime upgrade to upgrade harvester. Manual upgrade is for advance usage and purpose. Category: Manual Upgrade Verification Steps Download harvester v0.3.0 iso and do checksum Download harvester v1.0.0 iso and do checksum Use ISO Install a 4 nodes harvester cluster Create several OS images from URL Create ssh key Enable vlan network with harvester-mgmt Create virtual network vlan1 with id 1 Create 2 virtual machines ubuntu-vm: 2 core, 4GB memory, 30GB disk Setup backup target Take a backup from ubuntu vm Peform manual upgrade steps in the following docudment upgrade process Follow the manual upgrade steps to upgrade from v0. 
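<p>For the <code>install.device</code> symbolic-link case above, a minimal sketch of picking a stable disk link on the console and where it would go in the install config; the by-path value and the YAML nesting shown in the comment are illustrative assumptions, only the option name <code>install.device</code> comes from the case itself.</p> <pre tabindex="0"><code># list persistent symbolic links for the installed disks
ls -l /dev/disk/by-path

# use the chosen link in the Harvester install config instead of e.g. /dev/sda:
#   install:
#     device: /dev/disk/by-path/pci-0000:00:1f.2-ata-1
</code></pre>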
+ <ul> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/1644">#1644</a> Harvester pod crashes after upgrading from v0.3.0 to v1.0.0-rc1 (contain vm backup before upgrade)</p> </li> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/1588">#1588</a> VM backup cause harvester pod to crash</p> </li> </ul> <h2 id="notice">Notice</h2> <p>We recommend using zero downtime upgrade to upgrade harvester. Manual upgrade is for advance usage and purpose.</p> <h2 id="category">Category:</h2> <ul> <li>Manual Upgrade</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Download harvester v0.3.0 iso and do checksum</li> <li>Download harvester v1.0.0 iso and do checksum</li> <li>Use ISO Install a 4 nodes harvester cluster</li> <li>Create several OS images from URL</li> <li>Create ssh key</li> <li>Enable vlan network with <code>harvester-mgmt</code></li> <li>Create virtual network <code>vlan1</code> with id <code>1</code></li> <li>Create 2 virtual machines</li> </ol> <ul> <li>ubuntu-vm: 2 core, 4GB memory, 30GB disk</li> </ul> <ol> <li>Setup backup target</li> <li>Take a backup from ubuntu vm</li> <li>Peform manual upgrade steps in the following docudment</li> </ol> <p><strong>upgrade process</strong> Follow the manual upgrade steps to upgrade from v0.3.0 to v1.0.0-rc1 <a href="https://github.com/harvester/docs/blob/a4be9a58441eeee3b5564b70e499dc69c6040cc8/docs/upgrade.md">https://github.com/harvester/docs/blob/a4be9a58441eeee3b5564b70e499dc69c6040cc8/docs/upgrade.md</a></p> Power down a node out of three nodes available for the Cluster https://harvester.github.io/tests/manual/deployment/negative-power-off-one-node-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/negative-power-off-one-node-cluster/ - Create a three nodes cluster for Harvester. Power down an added node. Expected Results On power down the node, the status of the node should become down. Harvester system system should be still up. + <ol> <li>Create a three nodes cluster for Harvester.</li> <li>Power down an added node.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>On power down the node, the status of the node should become down.</li> <li>Harvester system system should be still up.</li> </ol> Power down the management node. https://harvester.github.io/tests/manual/deployment/negative-power-down-management-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/negative-power-down-management-node/ - Create a three nodes cluster for Harvester. Power down the first node which was added to the cluster. Expected Results On power down the node, the status of the node should become down. Harvester system system should be still up. 
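<p>A minimal sketch for observing cluster state during the node power-down cases here, assuming kubectl access through the VIP from a surviving node; the node name in the field selector is illustrative.</p> <pre tabindex="0"><code># watch node status flip to NotReady after the node is powered off
kubectl get nodes -o wide --watch

# check which pods were scheduled on the powered-off node (name is illustrative)
kubectl get pods -A --field-selector spec.nodeName=harvester-node-1 -o wide
</code></pre>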
+ <ol> <li>Create a three nodes cluster for Harvester.</li> <li>Power down the first node which was added to the cluster.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>On power down the node, the status of the node should become down.</li> <li>Harvester system system should be still up.</li> </ol> PXE instll without iso_url field https://harvester.github.io/tests/manual/deployment/1439-pxe-install-without-iso-url-field/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/1439-pxe-install-without-iso-url-field/ - Related issues: #1439 PXE boot installation doesn&rsquo;t give an error if iso_url field is missing Environment setup This is easiest to test with the vagrant setup at https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester edit https://github.com/harvester/ipxe-examples/blob/main/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2#L27 to be blank Verification Steps Run the vagrant ./setup.sh from the vagrant repo Expected Results You should get an error in the console for the VM when installing + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1439">#1439</a> PXE boot installation doesn&rsquo;t give an error if iso_url field is missing</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>This is easiest to test with the vagrant setup at <a href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester">https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester</a></p> <ol> <li>edit <a href="https://github.com/harvester/ipxe-examples/blob/main/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2#L27">https://github.com/harvester/ipxe-examples/blob/main/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2#L27</a> to be blank</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Run the vagrant <code>./setup.sh</code> from the vagrant repo</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error in the console for the VM when installing</li> </ol> Reboot the management node/added node. https://harvester.github.io/tests/manual/deployment/negative-reboot-management-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/negative-reboot-management-node/ - Create a three nodes cluster for Harvester. Reboot the management node/added node. Expected Results Once the node is up after reboot, the node should become available in the cluster. + <ol> <li>Create a three nodes cluster for Harvester.</li> <li>Reboot the management node/added node.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Once the node is up after reboot, the node should become available in the cluster.</li> </ol> Remove a node from the existing cluster https://harvester.github.io/tests/manual/deployment/remove-node-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/remove-node-cluster/ - Remove node from the Harvester cluster using the Harvester UI Expected Results The components of Harvester should get cleaned up from the node. 
+ <ol> <li>Remove node from the Harvester cluster using the Harvester UI</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>The components of Harvester should get cleaned up from the node.</p> Verify and Configure Networking Connection (e2e_be) https://harvester.github.io/tests/manual/deployment/verify-network-connection/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/verify-network-connection/ - Provide the hostName Select management NIC bond Select the IPv4 (Automatic and Static) Expected Results This value of hostname should be overwritten by DHCP if DHCP supplies a hostname for the system. If DHCP doesn&rsquo;t offer a hostname and this value is empty, a random hostname will be generated. + <ol> <li>Provide the hostName</li> <li>Select management NIC bond</li> <li>Select the IPv4 (Automatic and Static)</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>This value of hostname should be overwritten by DHCP if DHCP supplies a hostname for the system. If DHCP doesn&rsquo;t offer a hostname and this value is empty, a random hostname will be generated.</p> Verify Configuring SSH keys https://harvester.github.io/tests/manual/deployment/verify-ssh/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/verify-ssh/ - Provide SSH keys while installing the Harvester. Verify user is able to login the node using that ssh key. Expected Results User should be able to login to the node using that ssh key. + <ol> <li>Provide SSH keys while installing the Harvester.</li> <li>Verify user is able to login the node using that ssh key.</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>User should be able to login to the node using that ssh key.</p> Verify Configuring via HTTP URL https://harvester.github.io/tests/manual/deployment/verify-http-config/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/verify-http-config/ - Provide the remote Harvester config, you can find an example of the config I&rsquo;m using in the deployment test plan description Expected Results Check that all values are taking into account If you are using my config file, check: the node must be off after the installation the nvme and kvm modules are loaded the file /etc/test.txt exists with the correct rights the systcl values the env variable test_env should exist dns configured in /etc/resolv. + <ol> <li>Provide the remote Harvester config, you can find an example of the config I&rsquo;m using in the deployment test plan description</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Check that all values are taking into account <ul> <li>If you are using my config file, check:</li> <li>the node must be off after the installation</li> <li>the nvme and kvm modules are loaded</li> <li>the file /etc/test.txt exists with the correct rights</li> <li>the systcl values</li> <li>the env variable test_env should exist</li> <li>dns configured in /etc/resolv.conf </li> <li>ntp configured in /etc/systemd/timesyncd.conf</li> </ul> </li> <li>Check the config file here: /oem/harvester.config</li> </ol> Verify the installation confirmation screen https://harvester.github.io/tests/manual/deployment/verify-installation-confirmation/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/verify-installation-confirmation/ - Verify all the details shown on the screen Expected Results The info should reflect all the user filled data. 
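<p>A minimal sketch of the on-node checks for the HTTP-URL config case above, run after logging in to the installed node; the module names, test file, and <code>test_env</code> variable come from the example config referenced there, and exactly where the environment variable is exported depends on that config.</p> <pre tabindex="0"><code># kernel modules requested by the config should be loaded
lsmod | grep -E 'nvme|kvm'

# the test file and its permissions, and the injected environment variable
ls -l /etc/test.txt
env | grep test_env

# DNS and NTP settings written by the installer
cat /etc/resolv.conf
cat /etc/systemd/timesyncd.conf

# the rendered install configuration
sudo cat /oem/harvester.config
</code></pre>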
+ <ol> <li>Verify all the details shown on the screen</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>The info should reflect all the user filled data.</p> Verify the Installer Options https://harvester.github.io/tests/manual/deployment/verify-installer-options/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/verify-installer-options/ - Verify the following options available while installing the Harvester is working Installation target Node IP Cluster token Password DNS Server VIP HTTP Proxy NTP Address Expected Results Should show all the disks available. Verify the min and max length acceptable for cluster token. Verify the password rule + <ol> <li>Verify the following options available while installing the Harvester is working <ul> <li>Installation target</li> <li>Node IP</li> <li>Cluster token</li> <li>Password</li> <li>DNS Server</li> <li>VIP</li> <li>HTTP Proxy</li> <li>NTP Address</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Should show all the disks available.</li> <li>Verify the min and max length acceptable for cluster token.</li> <li>Verify the password rule</li> </ul> Verify the Proxy configuration https://harvester.github.io/tests/manual/deployment/verify-proxy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/verify-proxy/ - Provide a valid proxy address, verify it works after installation is complete. Provide empty proxy address. Expected Results For empty proxy address, by default DHCP should provide the management url and it should navigate to the Harvester UI. + <ul> <li>Provide a valid proxy address, verify it works after installation is complete.</li> <li>Provide empty proxy address.</li> </ul> <h2 id="expected-results">Expected Results</h2> <p>For empty proxy address, by default DHCP should provide the management url and it should navigate to the Harvester UI.</p> VIP Load balancer verification (e2e_be) https://harvester.github.io/tests/manual/deployment/verify-vip-load-balancer/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/deployment/verify-vip-load-balancer/ - Case DHCP Install Harvester on one Node Install with VIP pulling from DHCP Verify that IP is assigned via DHCP Add at least one additional node Use VIP address as management address for adding node Finish install of additional nodes Create new VM Connect to VM via web console Case Static IP Install Harvester on one Node Install with VIP set statically Verify that IP is assigned correctly Add at least one additional node Use VIP address as management address for adding node Finish install of additional nodes Create new VM Connect to VM via web console Expected Results Install of all nodes should complete New nodes should show up in hosts list via web UI at VIP VMs should create Console should open + <h2 id="case-dhcp">Case DHCP</h2> <ol> <li>Install Harvester on one Node <ul> <li>Install with VIP pulling from DHCP</li> <li>Verify that IP is assigned via DHCP </li> </ul> </li> <li>Add at least one additional node <ul> <li>Use VIP address as management address for adding node</li> </ul> </li> <li>Finish install of additional nodes</li> <li>Create new VM</li> <li>Connect to VM via web console</li> </ol> <h2 id="case-static-ip">Case Static IP</h2> <ol> <li>Install Harvester on one Node <ul> <li>Install with VIP set statically</li> <li>Verify that IP is assigned correctly</li> </ul> </li> <li>Add at least one additional node <ul> <li>Use VIP address as management address 
for adding node</li> </ul> </li> <li>Finish install of additional nodes</li> <li>Create new VM</li> <li>Connect to VM via web console</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Install of all nodes should complete</li> <li>New nodes should show up in hosts list via web UI at VIP</li> <li>VMs should create</li> <li>Console should open</li> </ol> diff --git a/manual/harvester-rancher/index.xml b/manual/harvester-rancher/index.xml index 809fc386f..d1ff5e775 100644 --- a/manual/harvester-rancher/index.xml +++ b/manual/harvester-rancher/index.xml @@ -12,525 +12,525 @@ https://harvester.github.io/tests/manual/harvester-rancher/1330-rancher-import-harvester-enhacement/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/1330-rancher-import-harvester-enhacement/ - Related issues: #1330 Http proxy setting download image Environment setup Install the latest rancher from docker command $ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6-head Create an one node harvester cluster Both harvester and rancher have internet connection Verification Steps Access rancher dashboard Open Virtualization Management page Import existing harvester Copy the registration url Create image from URL (change folder date to latest) https://cloud-images.ubuntu.com/focal/20211122/focal-server-cloudimg-amd64.img Access harvester dashboard Edit cluster-registration-url in settings Paste the registration url and save Back to rancher and wait for harvester imported in Rancher Expected Results Harvester can be imported in rancher dashboard with running status Can access harvester in virtual machine page Can create harvester cloud credential Can load harvester cloud credential while creating harvester + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1330">#1330</a> Http proxy setting download image</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ol> <li>Install the latest rancher from docker command</li> </ol> <pre tabindex="0"><code>$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6-head </code></pre><ol start="2"> <li>Create an one node harvester cluster</li> <li>Both harvester and rancher have internet connection</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Access rancher dashboard</li> <li>Open Virtualization Management page</li> <li>Import existing harvester</li> <li>Copy the registration url <img src="https://user-images.githubusercontent.com/29251855/143001156-31b06586-9b66-4016-a0f5-6dca92a7b2f6.png" alt="image"></li> <li>Create image from URL (change folder date to latest) <a href="https://cloud-images.ubuntu.com/focal/20211122/focal-server-cloudimg-amd64.img">https://cloud-images.ubuntu.com/focal/20211122/focal-server-cloudimg-amd64.img</a></li> <li>Access harvester dashboard</li> <li>Edit <code>cluster-registration-url</code> in settings <img src="https://user-images.githubusercontent.com/29251855/143771558-01398c11-8e3f-40c1-903e-2817cade80c8.png" alt="image"></li> <li>Paste the registration url and save</li> <li>Back to rancher and wait for harvester imported in Rancher</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Harvester can be imported in rancher dashboard with <code>running</code> status</li> <li>Can access harvester in virtual machine page</li> <li>Can create harvester cloud credential</li> <li>Can load harvester cloud credential while creating 
harvester</li> </ol> 02-Integrate to Rancher from Harvester settings (e2e_be) https://harvester.github.io/tests/manual/harvester-rancher/02-integrate-rancher-from-harvester-settings/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/02-integrate-rancher-from-harvester-settings/ - Environment setup Install the latest rancher from docker command $ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6-head Create an one node harvester cluster Both harvester and rancher have internet connection Verification Steps Access rancher dashboard Open Virtualization Management page Import existing harvester Copy the registration url Create image from URL (change folder date to latest) https://cloud-images.ubuntu.com/focal/20211122/focal-server-cloudimg-amd64.img Access harvester dashboard Edit cluster-registration-url in settings Paste the registration url and save Back to rancher and wait for harvester imported in Rancher Expected Results Harvester can be imported in rancher dashboard with running status Can access harvester in virtual machine page Can create harvester cloud credential Can load harvester cloud credential while creating harvester + <h2 id="environment-setup">Environment setup</h2> <ol> <li>Install the latest rancher from docker command</li> </ol> <pre tabindex="0"><code>$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6-head </code></pre><ol start="2"> <li>Create an one node harvester cluster</li> <li>Both harvester and rancher have internet connection</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Access rancher dashboard</li> <li>Open Virtualization Management page</li> <li>Import existing harvester</li> <li>Copy the registration url <img src="https://user-images.githubusercontent.com/29251855/143001156-31b06586-9b66-4016-a0f5-6dca92a7b2f6.png" alt="image"></li> <li>Create image from URL (change folder date to latest) <a href="https://cloud-images.ubuntu.com/focal/20211122/focal-server-cloudimg-amd64.img">https://cloud-images.ubuntu.com/focal/20211122/focal-server-cloudimg-amd64.img</a></li> <li>Access harvester dashboard</li> <li>Edit <code>cluster-registration-url</code> in settings <img src="https://user-images.githubusercontent.com/29251855/143771558-01398c11-8e3f-40c1-903e-2817cade80c8.png" alt="image"></li> <li>Paste the registration url and save</li> <li>Back to rancher and wait for harvester imported in Rancher</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Harvester can be imported in rancher dashboard with <code>running</code> status</li> <li>Can access harvester in virtual machine page</li> <li>Can create harvester cloud credential</li> <li>Can load harvester cloud credential while creating harvester</li> </ol> 03-Manage VM in Downstream Harvester https://harvester.github.io/tests/manual/harvester-rancher/03-manage-vm-downstream-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/03-manage-vm-downstream-harvester/ - Prerequisite: Harvester already imported to Rancher Dashboard Open harvester from Virtualization Management page Open Virtual Machine page Create a single instance virtual machine in Virtual Machines page Create multiple 3 instances virtual machines in Virtual Machines page Access and check virtual machine details Edit cpu, memory and network of one virtual machine Try Stop, Restart and Migrate virtual machine Try Clone virtual machine 
Try Delete virtual machine Use VM Template to create VM Expected Results Can create a single instance vm correctly Can create multiple instances vm correctly Can display all virtual machine information Can change cpu, memory and network and restart vm correctly Can Stop, Restart and Migrate virtual machine correctly Can Clone virtual machine correctly Can Delete virtual machine correctly + <p>Prerequisite: Harvester already imported to Rancher Dashboard</p> <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Open <code>Virtual Machine</code> page</li> <li>Create a single instance virtual machine in <code>Virtual Machines</code> page</li> <li>Create multiple 3 instances virtual machines in <code>Virtual Machines</code> page</li> <li>Access and check virtual machine details</li> <li>Edit cpu, memory and network of one virtual machine</li> <li>Try <code>Stop</code>, <code>Restart</code> and <code>Migrate</code> virtual machine</li> <li>Try <code>Clone</code> virtual machine</li> <li>Try <code>Delete</code> virtual machine</li> <li><code>Use VM Template</code> to create VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create a single instance vm correctly</li> <li>Can create multiple instances vm correctly</li> <li>Can display all virtual machine information</li> <li>Can change cpu, memory and network and restart vm correctly</li> <li>Can <code>Stop</code>, <code>Restart</code> and <code>Migrate</code> virtual machine correctly</li> <li>Can <code>Clone</code> virtual machine correctly</li> <li>Can <code>Delete</code> virtual machine correctly</li> </ol> 04-Manage Node in Downstream Harvester https://harvester.github.io/tests/manual/harvester-rancher/04-manage-host-downstream-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/04-manage-host-downstream-harvester/ - Prerequisite: Harvester already imported to Rancher Dashboard Open harvester from Virtualization Management page Open Host page Access and check node details Edit node config, change network and add disk Try to Cordon and decordon node Enable and disable Maintenance mode Expected Results Can diaply all node&rsquo;s information Can add disk to node correctly Can change network of node correctly Can Cordon and decordon node correctly Can enable and disable Maintenance mode + <p>Prerequisite: Harvester already imported to Rancher Dashboard</p> <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Open <code>Host</code> page</li> <li>Access and check node details</li> <li>Edit node config, change network and add disk</li> <li>Try to <code>Cordon</code> and <code>decordon</code> node</li> <li>Enable and disable <code>Maintenance mode</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can diaply all node&rsquo;s information</li> <li>Can add disk to node correctly</li> <li>Can change network of node correctly</li> <li>Can <code>Cordon</code> and <code>decordon</code> node correctly</li> <li>Can enable and disable <code>Maintenance mode</code></li> </ol> 05-Manage Image in Downstream Harvester https://harvester.github.io/tests/manual/harvester-rancher/05-manage-image-volume-downstream-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/05-manage-image-volume-downstream-harvester/ - Prerequisite: Harvester already imported to Rancher Dashboard Open harvester from Virtualization Management page Open Images page Create an image from URL 
Create an image from file Delete created images Expected Results Can create an image from URL Can create an image from file Can create an image from file Can delete created images correctly + <p>Prerequisite: Harvester already imported to Rancher Dashboard</p> <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Open <code>Images</code> page</li> <li>Create an image from URL</li> <li>Create an image from file</li> <li>Delete created images</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create an image from URL</li> <li>Can create an image from file</li> <li>Can create an image from file</li> <li>Can delete created images correctly</li> </ol> 06-Manage Network in Downstream Harvester https://harvester.github.io/tests/manual/harvester-rancher/06-manage-network-in-downstream-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/06-manage-network-in-downstream-harvester/ - Prerequisite: Harvester already imported to Rancher Dashboard Open harvester from Virtualization Management page Open Network page Create an new virtual network Create a new virtual machine using the new virtual network Delete a virtual network Expected Results Can create an new virtual network Create create a new virtual machine using the new virtual network Virtual machine can retrieve ip address Can delete a virtual network + <p>Prerequisite: Harvester already imported to Rancher Dashboard</p> <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Open <code>Network</code> page</li> <li>Create an new virtual network</li> <li>Create a new virtual machine using the new virtual network</li> <li>Delete a virtual network</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create an new virtual network</li> <li>Create create a new virtual machine using the new virtual network</li> <li>Virtual machine can retrieve ip address</li> <li>Can delete a virtual network</li> </ol> 07-Add and grant project-owner user to harvester (e2e_be) https://harvester.github.io/tests/manual/harvester-rancher/07-rbac-add-grant-project-owner-user-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/07-rbac-add-grant-project-owner-user-harvester/ - Open Users &amp; Authentication Click Users and Create Create user name project-owner and set password Select Standard User in the Global permission Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of default project Search project-owner user Assign Owner role to it Logout current user from Rancher Login with project-owner Open harvester from Virtualization Management page Expected Results Can create project-owner and set password Can assign Owner role to project-owner in default Can login correctly with project-owner Can manage all default project resources including host, virtual machines, volumes, VM and network + <ol> <li>Open <code>Users &amp; Authentication</code></li> <li>Click <code>Users</code> and Create</li> <li>Create user name <code>project-owner</code> and set password</li> <li>Select <code>Standard User</code> in the Global permission</li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>default</code> project</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/25221ce8-909a-4532-85d0-5a1912528f37" alt="image.png"></p> <ol> 
<li>Search project-owner user</li> <li>Assign <code>Owner</code> role to it</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/f3bb7b2d-f687-4cc0-bb98-f286f45ea17b" alt="image.png"></p> <ol> <li>Logout current user from Rancher</li> <li>Login with <code>project-owner</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create <code>project-owner</code> and set password</li> <li>Can assign <code>Owner</code> role to <code>project-owner</code> in default</li> <li>Can login correctly with <code>project-owner</code></li> <li>Can manage all <code>default</code> project resources including host, virtual machines, volumes, VM and network</li> </ol> 08-Add and grant project-readonly user to harvester https://harvester.github.io/tests/manual/harvester-rancher/08-rbac-add-grant-project-readonly-user-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/08-rbac-add-grant-project-readonly-user-harvester/ - Open Users &amp; Authentication Click Users and Create Create user name project-readonly and set password Select Standard User in the Global permission Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of default project Search project-readonly user Assign Read Only role to it Logout current user from Rancher Login with project-readonly Open harvester from Virtualization Management page Expected Results Can create project-readonly and set password Can assign Read Only role to project-readonly in default Can login correctly with project-readonly Can&rsquo;t see Host page in harvester Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip; + <ol> <li>Open <code>Users &amp; Authentication</code></li> <li>Click <code>Users</code> and Create</li> <li>Create user name <code>project-readonly</code> and set password</li> <li>Select <code>Standard User</code> in the Global permission</li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>default</code> project</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/25221ce8-909a-4532-85d0-5a1912528f37" alt="image.png"></p> <ol> <li>Search project-readonly user</li> <li>Assign <code>Read Only</code> role to it</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/0effd0f6-6e20-4415-801b-03c4c6294a24" alt="image.png"></p> <ol> <li>Logout current user from Rancher</li> <li>Login with <code>project-readonly</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create <code>project-readonly</code> and set password</li> <li>Can assign <code>Read Only</code> role to <code>project-readonly</code> in default</li> <li>Can login correctly with <code>project-readonly</code></li> <li>Can&rsquo;t see <code>Host</code> page in harvester</li> <li>Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip;</li> </ol> 09-Add and grant project-member user to harvester https://harvester.github.io/tests/manual/harvester-rancher/09-rbac-add-grant-project-member-user-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/09-rbac-add-grant-project-member-user-harvester/ - Open Users &amp; 
Authentication Click Users and Create Create user name project-member and set password Select Standard User in the Global permission Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of default project Search project-member user Assign Member role to it Logout current user from Rancher Login with project-member Open harvester from Virtualization Management page Expected Results Can create project-member and set password Can assign Member role to project-member in default Can login correctly with project-member Can&rsquo;t see Host page in harvester Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip; + <ol> <li>Open <code>Users &amp; Authentication</code></li> <li>Click <code>Users</code> and Create</li> <li>Create user name <code>project-member</code> and set password</li> <li>Select <code>Standard User</code> in the Global permission</li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>default</code> project</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/25221ce8-909a-4532-85d0-5a1912528f37" alt="image.png"></p> <ol> <li>Search project-member user</li> <li>Assign <code>Member</code> role to it</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/cac6a089-833c-4d37-b0da-bd0ad08677c1" alt="image.png"></p> <ol> <li>Logout current user from Rancher</li> <li>Login with <code>project-member</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create <code>project-member</code> and set password</li> <li>Can assign <code>Member</code> role to <code>project-member</code> in default</li> <li>Can login correctly with <code>project-member</code></li> <li>Can&rsquo;t see <code>Host</code> page in harvester</li> <li>Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip;</li> </ol> 10-Add and grant project-custom user to harvester https://harvester.github.io/tests/manual/harvester-rancher/10--rbacadd-grant-project-custom-user-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/10--rbacadd-grant-project-custom-user-harvester/ - Open Users &amp; Authentication Click Users and Create Create user name project-custom and set password Select Standard User in the Global permission Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of default project Search project-custom user Assign Custom role to it Set Create Namespace, Manage Volumes and View Volumes Logout current user from Rancher Login with project-custom Open harvester from Virtualization Management page Expected Results Can create project-custom and set password Can assign Custom role to project-custom in default Can login correctly with project-custom Can do Create Namespace, Manage Volumes and View Volumes in default project + <ol> <li>Open <code>Users &amp; Authentication</code></li> <li>Click <code>Users</code> and Create</li> <li>Create user name <code>project-custom</code> and set password</li> <li>Select <code>Standard User</code> in the Global permission</li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>default</code> project</li> </ol> <p><img 
src="https://images.zenhubusercontent.com/61519853321ea20d65443929/25221ce8-909a-4532-85d0-5a1912528f37" alt="image.png"></p> <ol> <li>Search project-custom user</li> <li>Assign <code>Custom</code> role to it</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/70098173-d9b5-43f5-85ab-5011f8c7d7c0" alt="image.png"></p> <ol> <li>Set <code>Create Namespace</code>, <code>Manage Volumes</code> and <code>View Volumes</code></li> <li>Logout current user from Rancher</li> <li>Login with <code>project-custom</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create <code>project-custom</code> and set password</li> <li>Can assign <code>Custom</code> role to <code>project-custom</code> in default</li> <li>Can login correctly with <code>project-custom</code></li> <li>Can do <code>Create Namespace</code>, <code>Manage Volumes</code> and <code>View Volumes</code> in default project</li> </ol> 11-Create New Project in Harvester https://harvester.github.io/tests/manual/harvester-rancher/11-create-project-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/11-create-project-harvester/ - Open harvester from Virtualization Management page Click Projects/Namespaces Click Create Project Set CPU and Memory limit in Resource Quotas Change view to testProject only Create some images Create some volumes Create a virtual machine Expected Results Can creat project correctly in Projects/Namespaces page Can create images correctly Can create volumes correctly Can create virtual machine correctly + <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Click Create Project</li> <li>Set CPU and Memory limit in <code>Resource Quotas</code></li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/4758318c-6e47-459e-95ef-5288c0a95d2a" alt="image.png"></p> <ol> <li>Change view to <code>testProject</code> only</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/3d0bc57b-ba09-44d4-9de1-8cc14ee87e0a" alt="image.png"></p> <ol> <li>Create some images</li> <li>Create some volumes</li> <li>Create a virtual machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can creat project correctly in <code>Projects/Namespaces</code> page</li> <li>Can create images correctly</li> <li>Can create volumes correctly</li> <li>Can create virtual machine correctly</li> </ol> 13-Add and grant project-owner user to custom project https://harvester.github.io/tests/manual/harvester-rancher/13-rbac-add-grant-project-owner-user-custom/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/13-rbac-add-grant-project-owner-user-custom/ - Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of testProject project Search project-owner user Assign Owner role to it Logout current user from Rancher Login with project-owner Open harvester from Virtualization Management page Change view to testProject only Expected Results Can assign Owner role to project-owner in testProject project Can manage all testProject project resources including host, virtual machines, volumes, VM and network + <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>testProject</code> 
project</li> <li>Search project-owner user</li> <li>Assign <code>Owner</code> role to it</li> <li>Logout current user from Rancher</li> <li>Login with <code>project-owner</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Change view to <code>testProject</code> only</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can assign <code>Owner</code> role to <code>project-owner</code> in <code>testProject</code> project</li> <li>Can manage all <code>testProject</code> project resources including host, virtual machines, volumes, VM and network</li> </ol> 14-Add and grant project-readonly user to custom project https://harvester.github.io/tests/manual/harvester-rancher/14-rbac-add-grant-project-readonly-user-custom/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/14-rbac-add-grant-project-readonly-user-custom/ - Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of testProject project Search project-readonly user Assign Read Only role to it Logout current user from Rancher Login with project-readonly Open harvester from Virtualization Management page Change view to testProject only Expected Results Can assign Read Only role to in testProject project Can login correctly with project-readonly Can&rsquo;t see Host page in testProject only view Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip; in testProject only view + <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>testProject</code> project</li> <li>Search project-readonly user</li> <li>Assign <code>Read Only</code> role to it</li> <li>Logout current user from Rancher</li> <li>Login with <code>project-readonly</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Change view to <code>testProject</code> only</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can assign <code>Read Only</code> role to in <code>testProject</code> project</li> <li>Can login correctly with <code>project-readonly</code></li> <li>Can&rsquo;t see <code>Host</code> page in <code>testProject</code> only view</li> <li>Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip; in <code>testProject</code> only view</li> </ol> 15-Add and grant project-member user to custom project https://harvester.github.io/tests/manual/harvester-rancher/15-rbac-add-grant-project-member-user-custom/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/15-rbac-add-grant-project-member-user-custom/ - Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of testProject project Search project-member user Assign Member role to it Logout current user from Rancher Login with project-member Open harvester from Virtualization Management page Change view to testProject only Expected Results Can assign Member role to project-member in testProject project Can login correctly with project-member Can&rsquo;t see Host page in testProject project Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip; in testProject project + <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>testProject</code> project</li> <li>Search project-member user</li> 
<li>Assign <code>Member</code> role to it</li> <li>Logout current user from Rancher</li> <li>Login with <code>project-member</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Change view to <code>testProject</code> only</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can assign <code>Member</code> role to <code>project-member</code> in <code>testProject</code> project</li> <li>Can login correctly with <code>project-member</code></li> <li>Can&rsquo;t see <code>Host</code> page in <code>testProject</code> project</li> <li>Can&rsquo;t create or edit any resource including virtual machines, volumes, Images &hellip; in <code>testProject</code> project</li> </ol> 16-Add and grant project-custom user to custom project https://harvester.github.io/tests/manual/harvester-rancher/16-rbac-add-grant-project-custom-user-custom/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/16-rbac-add-grant-project-custom-user-custom/ - Open harvester from Virtualization Management page Click Projects/Namespaces Edit config of testProject project Search project-custom user Assign Custom role to it Set Create Namespace, Manage Volumes and View Volumes Logout current user from Rancher Login with project-custom Open harvester from Virtualization Management page Change view to testProject only Expected Results Can assign Custom role to project-custom in testProject project Can login correctly with project-custom Can do Create Namespace, Manage Volumes and View Volumes in testProject project + <ol> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Click <code>Projects/Namespaces</code></li> <li>Edit config of <code>testProject</code> project</li> <li>Search project-custom user</li> <li>Assign <code>Custom</code> role to it</li> <li>Set <code>Create Namespace</code>, <code>Manage Volumes</code> and <code>View Volumes</code></li> <li>Logout current user from Rancher</li> <li>Login with <code>project-custom</code></li> <li>Open harvester from <code>Virtualization Management</code> page</li> <li>Change view to <code>testProject</code> only</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can assign <code>Custom</code> role to <code>project-custom</code> in <code>testProject</code> project</li> <li>Can login correctly with <code>project-custom</code></li> <li>Can do <code>Create Namespace</code>, <code>Manage Volumes</code> and <code>View Volumes</code> in <code>testProject</code> project</li> </ol> 17-Delete Imported Harvester Cluster (e2e_be) https://harvester.github.io/tests/manual/harvester-rancher/17-delete-imported-harvester-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/17-delete-imported-harvester-cluster/ - Finish 01-Import existing Harvester clusters in Rancher Open Virtualization Management page Delete already imported harvester Expected Results Can delete imported harvester correctly + <ol> <li>Finish 01-Import existing Harvester clusters in Rancher</li> <li>Open <code>Virtualization Management</code> page</li> <li>Delete already imported harvester</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can delete imported harvester correctly</li> </ol> 18-Delete Failed Imported Harvester Cluster https://harvester.github.io/tests/manual/harvester-rancher/18-delete-failed-imported-harvester-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 
https://harvester.github.io/tests/manual/harvester-rancher/18-delete-failed-imported-harvester-cluster/ - Make failure in 01-Import existing Harvester clusters in Rancher Open Virtualization Management page Delete already imported harvester Expected Results Can delete imported harvester correctly + <ol> <li>Make failure in 01-Import existing Harvester clusters in Rancher</li> <li>Open <code>Virtualization Management</code> page</li> <li>Delete already imported harvester</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can delete imported harvester correctly</li> </ol> 20-Create RKE1 Kubernetes Cluster https://harvester.github.io/tests/manual/harvester-rancher/20-create-rke1-kubernetes-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/20-create-rke1-kubernetes-cluster/ - Click Cluster Management Click Cloud Credentials Click createa and select Harvester Input credential name Select existing cluster in the Imported Cluster list Click Create Expand RKE1 Configuration Add Template in Node template Select Harvester Select created cloud credential created Select default namespace Select ubuntu image Select network: vlan1 Provide SSH User: ubuntu Provide template name, click create Open Cluster page, click Create Toggle RKE1 Provide cluster name Provide Name Prefix + <ol> <li>Click Cluster Management</li> <li>Click Cloud Credentials</li> <li>Click createa and select <code>Harvester</code></li> <li>Input credential name</li> <li>Select existing cluster in the <code>Imported Cluster</code> list</li> <li>Click Create</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/4a2f6a52-dac7-4a27-84b3-14cbeb4156aa" alt="image.png"></p> <ol> <li>Expand RKE1 Configuration</li> <li>Add Template in <code>Node template</code></li> <li>Select Harvester</li> <li>Select created cloud credential created</li> <li>Select <code>default</code> namespace</li> <li>Select ubuntu image</li> <li>Select network: <code>vlan1</code></li> <li>Provide SSH User: <code>ubuntu</code></li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/19ca6b90-4688-4ff3-8ecd-60982edf1950" alt="image.png"></p> <p><img src="https://user-images.githubusercontent.com/29251855/147911503-df997d2f-fa48-4ce9-876b-f309b1d6c7b1.png" alt="image"></p> <ol> <li> <p>Provide template name, click create <img src="https://user-images.githubusercontent.com/29251855/147911570-7868367e-7729-4c4d-bfef-01751c76ed75.png" alt="image"></p> 21-Delete RKE1 Kubernetes Cluster https://harvester.github.io/tests/manual/harvester-rancher/21-delete-rke1-kubernetes-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/21-delete-rke1-kubernetes-cluster/ - Open Cluster Management Check provisioned RKE1 cluster Click Delete from menu Expected Results Can remove RKE1 Cluster and disapper on Cluster page RKE1 Cluster will be removed from rancher menu under explore cluster RKE1 virtual machine should be also be removed from Harvester + <ol> <li>Open Cluster Management</li> <li>Check provisioned RKE1 cluster</li> <li>Click <code>Delete</code> from menu</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove RKE1 Cluster and disapper on Cluster page</li> <li>RKE1 Cluster will be removed from rancher menu under explore cluster</li> <li>RKE1 virtual machine should be also be removed from Harvester</li> </ol> 22-Create RKE2 Kubernetes Cluster (e2e_be) 
https://harvester.github.io/tests/manual/harvester-rancher/22-create-rke2-kubernetes-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/22-create-rke2-kubernetes-cluster/ - Click Cluster Management Click Cloud Credentials Click create and select Harvester Input credential name Select existing cluster in the Imported Cluster list Click Create Click Clusters Click Create Toggle RKE2/K3s Select Harvester Input Cluster Name Select default namespace Select ubuntu image Select network vlan1 Input SSH User: ubuntu Click Create Wait for RKE2 cluster provisioning complete (~20min) Expected Results Provision RKE2 cluster successfully with Running status Can acccess RKE2 cluster to check all resources and services + <ol> <li>Click Cluster Management</li> <li>Click Cloud Credentials</li> <li>Click create and select <code>Harvester</code></li> <li>Input credential name</li> <li>Select existing cluster in the <code>Imported Cluster</code> list</li> <li>Click Create</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/4a2f6a52-dac7-4a27-84b3-14cbeb4156aa" alt="image.png"></p> <ol> <li>Click Clusters</li> <li>Click Create</li> <li>Toggle RKE2/K3s</li> <li>Select Harvester</li> <li>Input <code>Cluster Name</code></li> <li>Select <code>default</code> namespace</li> <li>Select ubuntu image</li> <li>Select network <code>vlan1</code></li> <li>Input SSH User: <code>ubuntu</code></li> <li>Click Create</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/cbd9cc9b-60fb-4e81-985a-13fcaa88fa2f" alt="image.png"></p> <ol> <li>Wait for RKE2 cluster provisioning complete (~20min)</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Provision RKE2 cluster successfully with <code>Running</code> status</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/4526b95b-71f4-498f-b509-dea60ec5e0e5" alt="image.png"></p> 23-Delete RKE2 Kubernetes Cluster (e2e_be) https://harvester.github.io/tests/manual/harvester-rancher/23-delete-rke2-kubernetes-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/23-delete-rke2-kubernetes-cluster/ - Open Cluster Management Check provisioned RKE2 cluster Click Delete from menu Expected Results Can remove RKE2 Cluster and disapper on Cluster page RKE2 Cluster will be removed from rancher menu under explore cluster RKE2 virtual machine should be also be removed from Harvester + <ol> <li>Open Cluster Management</li> <li>Check provisioned RKE2 cluster</li> <li>Click <code>Delete</code> from menu</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove RKE2 Cluster and disapper on Cluster page</li> <li>RKE2 Cluster will be removed from rancher menu under explore cluster</li> <li>RKE2 virtual machine should be also be removed from Harvester</li> </ol> 24-Delete RKE1 Kubernetes Cluster in Provisioning https://harvester.github.io/tests/manual/harvester-rancher/24-delete-rke1-kubernetes-cluster-provisioning/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/24-delete-rke1-kubernetes-cluster-provisioning/ - Provision RKE1 Cluster Management When RKE1 cluster show Provisioning Click Delete from menu Expected Results Can remove RKE1 Cluster and disapper on Cluster page RKE1 Cluster will be removed from rancher menu under explore cluster RKE1 virtual machine should be also be removed from Harvester + <ol> <li>Provision RKE1 Cluster 
Management</li> <li>When RKE1 cluster show <code>Provisioning</code></li> <li>Click <code>Delete</code> from menu</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove RKE1 Cluster and disapper on Cluster page</li> <li>RKE1 Cluster will be removed from rancher menu under explore cluster</li> <li>RKE1 virtual machine should be also be removed from Harvester</li> </ol> 25-Delete RKE1 Kubernetes Cluster in Failure https://harvester.github.io/tests/manual/harvester-rancher/25-delete-rke1-kubernetes-cluster-failure/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/25-delete-rke1-kubernetes-cluster-failure/ - Provision RKE1 Cluster Management When RKE1 cluster displayed in Failure Click Delete from menu Expected Results Can remove RKE1 Cluster and disapper on Cluster page RKE1 Cluster will be removed from rancher menu under explore cluster RKE1 virtual machine should be also be removed from Harvester + <ol> <li>Provision RKE1 Cluster Management</li> <li>When RKE1 cluster displayed in <code>Failure</code></li> <li>Click <code>Delete</code> from menu</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove RKE1 Cluster and disapper on Cluster page</li> <li>RKE1 Cluster will be removed from rancher menu under explore cluster</li> <li>RKE1 virtual machine should be also be removed from Harvester</li> </ol> 26-Delete RKE2 Kubernetes Cluster in Provisioning https://harvester.github.io/tests/manual/harvester-rancher/26-delete-rke2-kubernetes-cluster-provisioning/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/26-delete-rke2-kubernetes-cluster-provisioning/ - Provision RKE2 Cluster Management When RKE2 cluster show Provisioning Click Delete from menu Expected Results Can remove RKE2 Cluster and disapper on Cluster page RKE2 Cluster will be removed from rancher menu under explore cluster RKE2 virtual machine should be also be removed from Harvester + <ol> <li>Provision RKE2 Cluster Management</li> <li>When RKE2 cluster show <code>Provisioning</code></li> <li>Click <code>Delete</code> from menu</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove RKE2 Cluster and disapper on Cluster page</li> <li>RKE2 Cluster will be removed from rancher menu under explore cluster</li> <li>RKE2 virtual machine should be also be removed from Harvester</li> </ol> 27-Delete RKE2 Kubernetes Cluster in Failure https://harvester.github.io/tests/manual/harvester-rancher/27-delete-rke2-kubernetes-cluster-failure/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/27-delete-rke2-kubernetes-cluster-failure/ - Provision RKE2 Cluster Management When RKE2 cluster displayed in Failure Click Delete from menu Expected Results Can remove RKE2 Cluster and disapper on Cluster page RKE2 Cluster will be removed from rancher menu under explore cluster RKE2 virtual machine should be also be removed from Harvester + <ol> <li>Provision RKE2 Cluster Management</li> <li>When RKE2 cluster displayed in <code>Failure</code></li> <li>Click <code>Delete</code> from menu</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove RKE2 Cluster and disapper on Cluster page</li> <li>RKE2 Cluster will be removed from rancher menu under explore cluster</li> <li>RKE2 virtual machine should be also be removed from Harvester</li> </ol> 30-Configure Harvester LoadBalancer service 
https://harvester.github.io/tests/manual/harvester-rancher/30-configure-harvester-loadbalancer-service/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/30-configure-harvester-loadbalancer-service/ - Prerequisite: Already provision RKE1/RKE2 cluster in previous test case Open Global Settings in hamburger menu Replace ui-dashboard-index to https://releases.rancher.com/harvester-ui/dashboard/latest/index.html Change ui-offline-preferred to Remote Refresh the current page (ctrl + r) Open provisioned RKE2 cluster from hamburger menu Drop down Service Discovery Click Services Click Create Select Load Balancer Given service name to make the load balancer name composed of the cluster name, namespace, svc name, and suffix(8 characters) more than 63 characters Provide Listening port and Target port Click Add-on Config Select Health Check port Select dhcp as IPAM mode Provide Health Check Threshold Provide Health Check Failure Threshold Provide Health Check Period Provide Health Check Timeout Click Create button Create another load balancer service with the name characters. + <p>Prerequisite: Already provision RKE1/RKE2 cluster in previous test case</p> <ol> <li>Open <code>Global Settings</code> in hamburger menu</li> <li>Replace <code>ui-dashboard-index</code> to <code>https://releases.rancher.com/harvester-ui/dashboard/latest/index.html</code></li> <li>Change <code>ui-offline-preferred</code> to <code>Remote</code></li> <li>Refresh the current page (ctrl + r)</li> <li>Open provisioned RKE2 cluster from hamburger menu</li> <li>Drop down <code>Service Discovery</code></li> <li>Click <code>Services</code></li> <li>Click Create</li> <li>Select <code>Load Balancer</code></li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/f628094c-a195-4f99-9fb7-858d759dc019" alt="image.png"></p> <ol> <li>Given service name to make the load balancer name composed of the cluster name, namespace, svc name, and suffix(8 characters) more than 63 characters</li> <li>Provide Listening port and Target port</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/2c20c759-4769-438b-94ad-5b995ba66873" alt="image.png"></p> 31-Specify "pool" IPAM mode in LoadBalancer service https://harvester.github.io/tests/manual/harvester-rancher/31-specify-pool-ipam-mode-loadbalancer-service/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/31-specify-pool-ipam-mode-loadbalancer-service/ - Prerequisite: Already provision RKE1/RKE2 cluster in previous test case Open Global Settings in hamburger menu Replace ui-dashboard-index to https://releases.rancher.com/harvester-ui/dashboard/latest/index.html Change ui-offline-preferred to Remote Refresh the current page (ctrl + r) Access Harvester dashboard UI Go to Settings Create a vip-pool in Harvester settings. 
Open provisioned RKE2 cluster from hamburger menu Drop down Service Discovery Click Services Click Create Select Load Balancer Given service name Provide Listending port and Target port Click Add-on Config Provide Health Check port Select pool as IPAM mode Provide Health Check Threshold Provide Health Check Failure Threshold Provide Health Check Period Provide Health Check Timeout Click Create button Expected Results Can create load balance service correctly Can operate and route to deployed service correctly + <p>Prerequisite: Already provision RKE1/RKE2 cluster in previous test case</p> <ol> <li>Open <code>Global Settings</code> in hamburger menu</li> <li>Replace <code>ui-dashboard-index</code> to <code>https://releases.rancher.com/harvester-ui/dashboard/latest/index.html</code></li> <li>Change <code>ui-offline-preferred</code> to <code>Remote</code></li> <li>Refresh the current page (ctrl + r)</li> <li>Access Harvester dashboard UI</li> <li>Go to Settings</li> <li>Create a vip-pool in Harvester settings. <img src="https://user-images.githubusercontent.com/29251855/158514040-bfcd9ff3-964a-4511-94d7-a497ef88848f.png" alt="image"></li> <li>Open provisioned RKE2 cluster from hamburger menu</li> <li>Drop down <code>Service Discovery</code></li> <li>Click <code>Services</code></li> <li>Click Create</li> <li>Select <code>Load Balancer</code></li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/f628094c-a195-4f99-9fb7-858d759dc019" alt="image.png"></p> 32-Deploy Harvester CSI provider to RKE 1 Cluster https://harvester.github.io/tests/manual/harvester-rancher/32-deploy-harvester-csi-provider-to-rke1-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/32-deploy-harvester-csi-provider-to-rke1-cluster/ - Related task: #1396 Integration Cloud Provider for RKE1 with Rancher Environment Setup Docker install rancher v2.6.3 Create one node harvester with enough resource Verify steps Environment preparation as above steps Import harvester to rancher from harvester settings Create cloud credential Create RKE1 node template Provision a RKE1 cluster, check the Harvester as cloud provider Access RKE1 cluster Open charts in Apps &amp; Market page Install Harvester CSI driver Make sure CSI driver installed complete NAME: harvester-csi-driver LAST DEPLOYED: Thu Dec 16 03:59:54 2021 NAMESPACE: kube-system STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Successfully deployed Harvester CSI driver to the kube-system namespace. 
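The CSI driver installation above can be confirmed from the RKE1 cluster; this is only a possible check, assuming kubectl and helm access, with the release and namespace names taken from the helm output shown above:
<pre tabindex="0"><code># Sketch: confirm the chart deployed and its pods are running in kube-system.
helm -n kube-system status harvester-csi-driver
kubectl -n kube-system get pods | grep harvester-csi-driver
kubectl get csidrivers
</code></pre>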
+ <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/1396">#1396</a> Integration Cloud Provider for RKE1 with Rancher</li> </ul> <h3 id="environment-setup">Environment Setup</h3> <ol> <li>Docker install rancher v2.6.3</li> <li>Create one node harvester with enough resource</li> </ol> <h3 id="verify-steps">Verify steps</h3> <ol> <li>Environment preparation as above steps</li> <li>Import harvester to rancher from harvester settings</li> <li>Create cloud credential</li> <li>Create RKE1 node template <img src="https://user-images.githubusercontent.com/29251855/146299688-3875c18f-61d6-48e6-a15e-250d59c177ba.png" alt="image"></li> <li>Provision a RKE1 cluster, check the <code>Harvester</code> as cloud provider <img src="https://user-images.githubusercontent.com/29251855/146342214-568bf017-e0e2-4b3a-9f38-894eff77d439.png" alt="image"></li> <li>Access RKE1 cluster</li> <li>Open charts in Apps &amp; Market page</li> <li>Install Harvester CSI driver</li> <li>Make sure CSI driver installed complete</li> </ol> <pre tabindex="0"><code>NAME: harvester-csi-driver LAST DEPLOYED: Thu Dec 16 03:59:54 2021 NAMESPACE: kube-system STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Successfully deployed Harvester CSI driver to the kube-system namespace. --------------------------------------------------------------------- SUCCESS: helm install --namespace=kube-system --timeout=10m0s --values=/home/shell/helm/values-harvester-csi-driver-100.0.0-up0.1.8.yaml --version=100.0.0+up0.1.8 --wait=true harvester-csi-driver /home/shell/helm/harvester-csi-driver-100.0.0-up0.1.8.tgz </code></pr 33-Deploy Harvester CSI provider to RKE 2 Cluster https://harvester.github.io/tests/manual/harvester-rancher/33-deploy-harvester-csi-provider-to-rke2-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/33-deploy-harvester-csi-provider-to-rke2-cluster/ - Click Clusters Click Create Toggle RKE2/K3s Select Harvester Input Cluster Name Select default namespace Select ubuntu image Select network vlan1 Input SSH User: ubuntu Check alread set Harvester as cloud provider Click Create Wait for RKE2 cluster provisioning complete (~20min) Expected Results Provision RKE2 cluster successfully with Running status Can acccess RKE2 cluster to check all resources and services Check CSI driver installed and configured on RKE2 cluster + <ol> <li>Click Clusters</li> <li>Click Create</li> <li>Toggle RKE2/K3s</li> <li>Select Harvester</li> <li>Input <code>Cluster Name</code></li> <li>Select <code>default</code> namespace</li> <li>Select ubuntu image</li> <li>Select network <code>vlan1</code></li> <li>Input SSH User: <code>ubuntu</code></li> <li>Check alread set <code>Harvester</code> as cloud provider</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/514d1d88-08e7-441a-861c-38bb3c96bbe7" alt="image.png"></p> <ol> <li>Click Create</li> <li>Wait for RKE2 cluster provisioning complete (~20min)</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Provision RKE2 cluster successfully with <code>Running</code> status</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/4526b95b-71f4-498f-b509-dea60ec5e0e5" alt="image.png"></p> <ol> <li>Can acccess RKE2 cluster to check all resources and services</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/682dccdc-cc0b-427f-ab7a-fdfaa1f82e06" alt="image.png"></p> 34-Hot plug and unplug volumes in RKE1 cluster 
https://harvester.github.io/tests/manual/harvester-rancher/34-hotplug-unplug-volumes-in-rke1-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/34-hotplug-unplug-volumes-in-rke1-cluster/ - Related task: #1396 Integration Cloud Provider for RKE1 with Rancher Environment Setup Docker install rancher v2.6.3 Create one node harvester with enough resource Verify Steps Environment preparation as above steps Import harvester to rancher from harvester settings Create cloud credential Create RKE1 node template Provision a RKE1 cluster, check the Harvester as cloud provider Access RKE1 cluster Open charts in Apps &amp; Market page Install harvester cloud provider and CSI driver + <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/1396">#1396</a> Integration Cloud Provider for RKE1 with Rancher</li> </ul> <h3 id="environment-setup">Environment Setup</h3> <ol> <li>Docker install rancher v2.6.3</li> <li>Create one node harvester with enough resource</li> </ol> <h3 id="verify-steps">Verify Steps</h3> <ol> <li> <p>Environment preparation as above steps</p> </li> <li> <p>Import harvester to rancher from harvester settings</p> </li> <li> <p>Create cloud credential</p> </li> <li> <p>Create RKE1 node template <img src="https://user-images.githubusercontent.com/29251855/146299688-3875c18f-61d6-48e6-a15e-250d59c177ba.png" alt="image"></p> </li> <li> <p>Provision a RKE1 cluster, check the <code>Harvester</code> as cloud provider <img src="https://user-images.githubusercontent.com/29251855/146342214-568bf017-e0e2-4b3a-9f38-894eff77d439.png" alt="image"></p> 35-Hot plug and unplug volumes in RKE2 cluster https://harvester.github.io/tests/manual/harvester-rancher/35-hotplug-unplug-volumes-in-rke2-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/35-hotplug-unplug-volumes-in-rke2-cluster/ - Related task: #1396 Integration Cloud Provider for RKE1 with Rancher Environment Setup Docker install rancher v2.6.3 Create one node harvester with enough resource Verify Steps Environment preparation as above steps Import harvester to rancher from harvester settings Create cloud credential Create RKE2 cluster as test case #34 Access RKE2 cluster Open charts in Apps &amp; Market page Install harvester cloud provider and CSI driver Make sure cloud provider installed complete + <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/1396">#1396</a> Integration Cloud Provider for RKE1 with Rancher</li> </ul> <h3 id="environment-setup">Environment Setup</h3> <ol> <li>Docker install rancher v2.6.3</li> <li>Create one node harvester with enough resource</li> </ol> <h3 id="verify-steps">Verify Steps</h3> <ol> <li> <p>Environment preparation as above steps</p> </li> <li> <p>Import harvester to rancher from harvester settings</p> </li> <li> <p>Create cloud credential</p> </li> <li> <p>Create RKE2 cluster as test case #34</p> </li> <li> <p>Access RKE2 cluster</p> 36-Remove Harvester LoadBalancer service https://harvester.github.io/tests/manual/harvester-rancher/36-remove-harvester-loadbalancer-service/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/36-remove-harvester-loadbalancer-service/ - Open provisioned RKE2 cluster from hamburger menu Drop down Service Discovery Click Services Delete previous created load balancer service Expected Results Can remove load balance service correctly Service will be removed from assigned Apps + <ol> <li>Open 
provisioned RKE2 cluster from hamburger menu</li> <li>Drop down <code>Service Discovery</code></li> <li>Click <code>Services</code></li> <li>Delete previous created load balancer service</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove load balance service correctly</li> <li>Service will be removed from assigned Apps</li> </ol> 37-Import Online Harvester From the Airgapped Rancher https://harvester.github.io/tests/manual/harvester-rancher/37-import-online-harvester-from-airgapped-rancher-copy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/37-import-online-harvester-from-airgapped-rancher-copy/ - Environment Setup Setup the online harvester Use ipxe vagrant example to setup a 3 nodes cluster https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester Enable vlan on harvester-mgmt Now harvester dashboard page will out of work Create ubuntu cloud image from URL Create virtual machine with name vlan1 and id: 1 Create virtual machine and assign vlan network, confirm can get ip address Setup squid HTTP proxy server Move to vagrant pxe harvester folder Execute vagrant ssh pxe_server Run apt-get install squid Edit /etc/squid/squid. + <h3 id="environment-setup">Environment Setup</h3> <p>Setup the online harvester</p> <ol> <li>Use ipxe vagrant example to setup a 3 nodes cluster <a href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester">https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester</a></li> <li>Enable vlan on <code>harvester-mgmt</code></li> <li>Now harvester dashboard page will out of work</li> <li>Create ubuntu cloud image from URL</li> <li>Create virtual machine with name <code>vlan1</code> and id: <code>1</code></li> <li>Create virtual machine and assign vlan network, confirm can get ip address</li> </ol> <p>Setup squid HTTP proxy server</p> <ol> <li>Move to vagrant pxe harvester folder</li> <li>Execute <code>vagrant ssh pxe_server</code></li> <li>Run <code>apt-get install squid</code></li> <li>Edit <code>/etc/squid/squid.conf</code> and add line</li> </ol> <pre tabindex="0"><code>http_access allow all http_port 3128 </code></pr 37-Import Online Harvester From the Airgapped Rancher https://harvester.github.io/tests/manual/harvester-rancher/37-import-online-harvester-from-airgapped-rancher/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/37-import-online-harvester-from-airgapped-rancher/ - Environment Setup Setup the online harvester Use ipxe vagrant example to setup a 3 nodes cluster https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester Enable vlan on harvester-mgmt Now harvester dashboard page will out of work Create ubuntu cloud image from URL Create virtual machine with name vlan1 and id: 1 Create virtual machine and assign vlan network, confirm can get ip address Setup squid HTTP proxy server Move to vagrant pxe harvester folder Execute vagrant ssh pxe_server Run apt-get install squid Edit /etc/squid/squid. 
+ <h3 id="environment-setup">Environment Setup</h3> <p>Setup the online harvester</p> <ol> <li>Use ipxe vagrant example to setup a 3 nodes cluster <a href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester">https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester</a></li> <li>Enable vlan on <code>harvester-mgmt</code></li> <li>Now harvester dashboard page will out of work</li> <li>Create ubuntu cloud image from URL</li> <li>Create virtual machine with name <code>vlan1</code> and id: <code>1</code></li> <li>Create virtual machine and assign vlan network, confirm can get ip address</li> </ol> <p>Setup squid HTTP proxy server</p> <ol> <li>Move to vagrant pxe harvester folder</li> <li>Execute <code>vagrant ssh pxe_server</code></li> <li>Run <code>apt-get install squid</code></li> <li>Edit <code>/etc/squid/squid.conf</code> and add line</li> </ol> <pre tabindex="0"><code>http_access allow all http_port 3128 </code></pr 38-Import Airgapped Harvester From the Airgapped Rancher https://harvester.github.io/tests/manual/harvester-rancher/38-import-airgapped-harvester-from-airgapped-rancher/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/38-import-airgapped-harvester-from-airgapped-rancher/ - Related task: #1052 Test Air gap with Rancher integration Environment Setup Setup the airgapped harvester Fetch ipxe vagrant example with new offline feature https://github.com/harvester/ipxe-examples/pull/32 Edit the setting.xml file Set offline: true Use ipxe vagrant example to setup a 3 nodes cluster Enable vlan on harvester-mgmt Now harvester dashboard page will out of work Create virtual machine with name vlan1 and id: 1 Open Settings, edit http-proxy with the following values HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127. 
+ <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/1052">#1052</a> Test Air gap with Rancher integration</li> </ul> <h3 id="environment-setup">Environment Setup</h3> <p>Setup the airgapped harvester</p> <ol> <li>Fetch ipxe vagrant example with new offline feature <a href="https://github.com/harvester/ipxe-examples/pull/32">https://github.com/harvester/ipxe-examples/pull/32</a></li> <li>Edit the setting.xml file</li> <li>Set offline: <code>true</code></li> <li>Use ipxe vagrant example to setup a 3 nodes cluster</li> <li>Enable vlan on <code>harvester-mgmt</code></li> <li>Now harvester dashboard page will out of work</li> <li>Create virtual machine with name <code>vlan1</code> and id: <code>1</code></li> <li>Open Settings, edit <code>http-proxy</code> with the following values</li> </ol> <pre tabindex="0"><code>HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; </code></pr 39-Standard user no Harvester Access https://harvester.github.io/tests/manual/harvester-rancher/39-rbac-standard-user-no-access/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/39-rbac-standard-user-no-access/ - As admin import/register a harvester cluster in Rancher As admin, Enable Harvester node driver As a standard user User1, login to rancher Verify User1 has no access to harvester cluster in Virtualization management page Verify User1 can not create harvester cloud credential as User1 Verify User1 can not use this cloud credential to create a node template and can not use a node driver cluster 3 and can not CRUD each resource + <ol> <li>As admin import/register a harvester cluster in Rancher</li> <li>As admin, Enable Harvester node driver</li> <li>As a standard user User1, login to rancher</li> <li>Verify User1 has no access to harvester cluster in Virtualization management page</li> <li>Verify User1 can not create harvester cloud credential as User1</li> <li>Verify User1 can not use this cloud credential to create a node template and can not use a node driver cluster 3 and can not CRUD each resource</li> </ol> 40-RBAC Add restricted admin User Harvester https://harvester.github.io/tests/manual/harvester-rancher/40-rbac-add-restricted-admin-user-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/40-rbac-add-restricted-admin-user-harvester/ - As admin import/register a harvester cluster in Rancher create restricted admin user rstradm verify rstradm has access to to Virturalization management page and the harvester cluster is listed Verify rstradm has access to Harvester UI through rancher by selecting it from the list in step 3 and can CRUD each resource + <ol> <li>As admin import/register a harvester cluster in Rancher</li> <li>create restricted admin user rstradm</li> <li>verify rstradm has access to to Virturalization management page and the harvester cluster is listed</li> <li>Verify rstradm has access to Harvester UI through rancher by selecting it from the list in step 3 and can CRUD each resource</li> </ol> 41-Import Harvester into nested Rancher https://harvester.github.io/tests/manual/harvester-rancher/41-rancher-nested-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/41-rancher-nested-harvester/ - Prerequisite: External network on VLAN Install Rancher in a VM using Docker method on Harvester 
cluster using the external VLAN Login rancher dashboard Navigate to Virtual Management Page Click import existing Copy the curl command SSH to harvester master node (user: rancher) Execute the curl command to import harvester to rancher curl --insecure -sfL https://192.168.50.82/v3/import/{identifier}.yaml | kubectl apply -f - Run sudo chmod 775 /etc/rancher/rke2/rke2.yaml to solve the permission denied error Run curl command again, you should see the following successful import message namespace/cattle-system configured serviceaccount/cattle created clusterrolebinding. + <p>Prerequisite: External network on VLAN</p> <ol> <li>Install Rancher in a VM using Docker method on Harvester cluster using the external VLAN</li> <li>Login rancher dashboard</li> <li>Navigate to Virtual Management Page</li> <li>Click import existing</li> <li>Copy the curl command <img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/08e70d37-e573-47b1-a3d6-0f3615116d48" alt="image.png"></li> <li>SSH to harvester master node (user: rancher)</li> <li>Execute the curl command to import harvester to rancher <code>curl --insecure -sfL https://192.168.50.82/v3/import/{identifier}.yaml | kubectl apply -f -</code></li> <li>Run <code>sudo chmod 775 /etc/rancher/rke2/rke2.yaml</code> to solve the permission denied error</li> <li>Run curl command again, you should see the following successful import message <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>namespace/cattle-system configured </span></span><span style="display:flex;"><span>serviceaccount/cattle created </span></span><span style="display:flex;"><span>clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created </span></span><span style="display:flex;"><span>secret/cattle-credentials-413137f created </span></span><span style="display:flex;"><span>clusterrole.rbac.authorization.k8s.io/cattle-admin created </span></span><span style="display:flex;"><span>deployment.apps/cattle-cluster-agent created </span></span><span style="display:flex;"><span>service/cattle-cluster-agent created </span></span></code></pr 42-Add cloud credential KUBECONFIG https://harvester.github.io/tests/manual/harvester-rancher/42-add-cloud-credential-kubeconfig/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/42-add-cloud-credential-kubeconfig/ - Prerequisite: KUBECONFIG from Harvester Click Cluster Management Click Cloud Credentials Click createa and select Harvester Input credential name Select external cluster Input KUBECONFIG from Harvester Click Create + <p>Prerequisite: KUBECONFIG from Harvester</p> <ol> <li>Click Cluster Management</li> <li>Click Cloud Credentials</li> <li>Click createa and select <code>Harvester</code></li> <li>Input credential name</li> <li>Select external cluster</li> <li>Input KUBECONFIG from Harvester</li> <li>Click Create</li> </ol> <p><img src="https://user-images.githubusercontent.com/83787952/134994316-30438401-b80f-47a9-bbe4-122bf0a2a69f.jpg" alt="image.png"></p> 43-Scale up node driver RKE1 https://harvester.github.io/tests/manual/harvester-rancher/43-node-driver-scale-up-rke1/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/43-node-driver-scale-up-rke1/ - Prerequisite: RKE1 cluster in Harvester with at least 2 worker nodes provision a multinode cluster using harvester node driver with at 
least 2 worker nodes scale up a node in the cluster + <p>Prerequisite: RKE1 cluster in Harvester with at least 2 worker nodes</p> <ol> <li>provision a multinode cluster using harvester node driver with at least 2 worker nodes</li> <li>scale up a node in the cluster</li> </ol> 44-Scale up node driver RKE2 https://harvester.github.io/tests/manual/harvester-rancher/44-node-driver-scale-up-rke2/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/44-node-driver-scale-up-rke2/ - Prerequisite: KUBECONFIG from Harvester provision a multinode cluster using harvester node driver with at least 2 worker nodes scale up a node in the cluster + <p>Prerequisite: KUBECONFIG from Harvester</p> <ol> <li>provision a multinode cluster using harvester node driver with at least 2 worker nodes</li> <li>scale up a node in the cluster</li> </ol> 45-Scale down node driver RKE1 https://harvester.github.io/tests/manual/harvester-rancher/45-node-driver-scale-down-rke1/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/45-node-driver-scale-down-rke1/ - Prerequisite: KUBECONFIG from Harvester provision a multinode cluster using harvester node driver with at least 2 worker nodes scale down a node in the cluster + <p>Prerequisite: KUBECONFIG from Harvester</p> <ol> <li>provision a multinode cluster using harvester node driver with at least 2 worker nodes</li> <li>scale down a node in the cluster</li> </ol> 46-Scale down node driver RKE2 https://harvester.github.io/tests/manual/harvester-rancher/46-node-driver-scale-down-rke2/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/46-node-driver-scale-down-rke2/ - Prerequisite: KUBECONFIG from Harvester provision a multinode cluster using harvester node driver with at least 2 worker nodes scale down a node in the cluster + <p>Prerequisite: KUBECONFIG from Harvester</p> <ol> <li>provision a multinode cluster using harvester node driver with at least 2 worker nodes</li> <li>scale down a node in the cluster</li> </ol> 49-Overprovision Harvester https://harvester.github.io/tests/manual/harvester-rancher/49-overprovision-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/49-overprovision-harvester/ - import harvester into rancher over-provision the connected harvester cluster (i.e. deploy large number of nodes) note: the number will depend on the resources available in the harvester cluster you&rsquo;ve imported. i.e. a harvester setup with 24 cores, 64 GB of ram, you could try provisioning a 3cp, 2cp, 2w cluster of size 4 vCPU 8GB ram to over-provision CPU i.e. a harvester setup with 24 cores, 64 GB of ram, you could try provisioning a 3cp, 2cp, 2w cluster of size 2 vCPU 10GB ram to over-provision CPU + <ol> <li>import harvester into rancher</li> <li>over-provision the connected harvester cluster (i.e. deploy large number of nodes)</li> <li>note: the number will depend on the resources available in the harvester cluster you&rsquo;ve imported.</li> <li>i.e. a harvester setup with 24 cores, 64 GB of ram, you could try provisioning a 3cp, 2cp, 2w cluster of size 4 vCPU 8GB ram to over-provision CPU</li> <li>i.e. 
a harvester setup with 24 cores, 64 GB of ram, you could try provisioning a 3cp, 2cp, 2w cluster of size 2 vCPU 10GB ram to over-provision CPU</li> </ol> 50-Use fleet when a harvester cluster is imported to rancher https://harvester.github.io/tests/manual/harvester-rancher/50-fleet-with-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/50-fleet-with-harvester/ - deploy rancher with harvester enabled docker: &ndash;features=harvester=enabled helm: &ndash;set &rsquo;extraEnv[0].name=CATTLE_FEATURES&rsquo; &ndash;set &rsquo;extraEnv[0].value=harvester=enabled import a harvester setup go to fleet → repos -&gt; create validate that the harvester cluster is NOT in the dropdown for cluster deployments validate that selecting the &lsquo;all clusters&rsquo; option for deployment does NOT deploy to the harvester cluster + <ol> <li>deploy rancher with harvester enabled</li> <li>docker: &ndash;features=harvester=enabled</li> <li>helm: &ndash;set &rsquo;extraEnv[0].name=CATTLE_FEATURES&rsquo; &ndash;set &rsquo;extraEnv[0].value=harvester=enabled</li> <li>import a harvester setup</li> <li>go to fleet → repos -&gt; create</li> <li>validate that the harvester cluster is NOT in the dropdown for cluster deployments</li> <li>validate that selecting the &lsquo;all clusters&rsquo; option for deployment does NOT deploy to the harvester cluster</li> </ol> 51-Use harvester cloud provider to provision an LB - rke1 https://harvester.github.io/tests/manual/harvester-rancher/51-harvester-cloud-provider-loadbalancer-rke1/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/51-harvester-cloud-provider-loadbalancer-rke1/ - Related ticket: #1396 Integration Cloud Provider for RKE1 with Rancher Provision cluster using rke1 with harvester as the node driver Deploy cloud provider from App. Create a deployment with nginx:latest image. Create a Harvester load balancer to the pod of above deployment. Verify by clicking the service, if the load balancer is redirecting to the nginx home page. + <ul> <li>Related ticket: <a href="https://github.com/harvester/harvester/issues/1396">#1396</a> Integration Cloud Provider for RKE1 with Rancher</li> </ul> <ol> <li>Provision cluster using rke1 with harvester as the node driver</li> <li>Deploy cloud provider from App.</li> <li>Create a deployment with <code>nginx:latest</code> image.</li> <li>Create a Harvester load balancer to the pod of above deployment.</li> <li>Verify by clicking the service, if the load balancer is redirecting to the nginx home page.</li> </ol> 52-Use harvester cloud provider to provision an LB - rke2 https://harvester.github.io/tests/manual/harvester-rancher/52-harvester-cloud-provider-loadbalancer-rke2/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/52-harvester-cloud-provider-loadbalancer-rke2/ - Provision cluster using rke2 with harvester as the node driver Enable the cloud driver for harvester while provisioning the cluster Create a deployment with nginx:latest image. Create a Harvester load balancer to the pod of above deployment. Verify by clicking the service, if the load balancer is redirecting to the nginx home page. 
+ <ol> <li>Provision cluster using rke2 with harvester as the node driver</li> <li>Enable the cloud driver for <code>harvester</code> while provisioning the cluster</li> <li>Create a deployment with <code>nginx:latest</code> image.</li> <li>Create a Harvester load balancer to the pod of above deployment.</li> <li>Verify by clicking the service, if the load balancer is redirecting to the nginx home page.</li> </ol> 53-Disable Harvester flag with Harvester cluster added https://harvester.github.io/tests/manual/harvester-rancher/53-disable-harvester-flag/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/53-disable-harvester-flag/ - Pre-requisites: Rancher with Harvester imported Disable Harvester feature flag on Rancher Expected Results Harvester should show up in cluster management Virtualization management tab should be hidden. + <p>Pre-requisites: Rancher with Harvester imported</p> <ol> <li>Disable Harvester feature flag on Rancher</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Harvester should show up in cluster management</li> <li>Virtualization management tab should be hidden.</li> </ol> 54-Import Airgapped Harvester From the Online Rancher https://harvester.github.io/tests/manual/harvester-rancher/54-import-airgapped-harvester-from-online-rancher/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/54-import-airgapped-harvester-from-online-rancher/ - Environment Setup Setup the airgapped harvester Fetch ipxe vagrant example with new offline feature https://github.com/harvester/ipxe-examples/pull/32 Edit the setting.xml file Set offline: true Use ipxe vagrant example to setup a 3 nodes cluster https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester Enable vlan on harvester-mgmt Now harvester dashboard page will out of work Open Settings, edit http-proxy with the following values HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; Create ubuntu cloud image from URL Create virtual machine with name vlan1 and id: 1 Create virtual machine and assign vlan network, confirm can get ip address Setup squid HTTP proxy server + <h3 id="environment-setup">Environment Setup</h3> <p>Setup the airgapped harvester</p> <ol> <li>Fetch ipxe vagrant example with new offline feature <a href="https://github.com/harvester/ipxe-examples/pull/32">https://github.com/harvester/ipxe-examples/pull/32</a></li> <li>Edit the setting.xml file</li> <li>Set offline: <code>true</code></li> <li>Use ipxe vagrant example to setup a 3 nodes cluster <a href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester">https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester</a></li> <li>Enable vlan on <code>harvester-mgmt</code></li> <li>Now harvester dashboard page will out of work</li> <li>Open Settings, edit <code>http-proxy</code> with the following values</li> </ol> <pre tabindex="0"><code>HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; </code></pre><ol> <li>Create ubuntu cloud image from URL</li> <li>Create virtual machine with name <code>vlan1</code> and id: <code>1</code></li> <li>Create virtual machine and assign vlan network, confirm can get ip address</li> </ol> <p>Setup squid HTTP 
proxy server</p> 55-Import Harvester to Rancher in airgapped different subnet https://harvester.github.io/tests/manual/harvester-rancher/55-import-harvester-rancher-airgapped-different-subnet/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/55-import-harvester-rancher-airgapped-different-subnet/ - Environment Setup Note: Harvester and Rancher are under different subnet, can access to each other Setup the airgapped harvester Fetch ipxe vagrant example with new offline feature https://github.com/harvester/ipxe-examples/pull/32 Edit the setting.xml file Set offline: true Use ipxe vagrant example to setup a 3 nodes cluster Enable vlan on harvester-mgmt Create virtual machine with name vlan1 and id: 1 Open Settings, edit http-proxy with the following values HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; Create ubuntu cloud image from URL Create virtual machine and assign vlan network, confirm can get ip address Setup squid HTTP proxy server + <h3 id="environment-setup">Environment Setup</h3> <p><code>Note: Harvester and Rancher are under different subnet, can access to each other</code></p> <p>Setup the airgapped harvester</p> <ol> <li>Fetch ipxe vagrant example with new offline feature <a href="https://github.com/harvester/ipxe-examples/pull/32">https://github.com/harvester/ipxe-examples/pull/32</a></li> <li>Edit the setting.xml file</li> <li>Set offline: <code>true</code></li> <li>Use ipxe vagrant example to setup a 3 nodes cluster</li> <li>Enable vlan on <code>harvester-mgmt</code></li> <li>Create virtual machine with name <code>vlan1</code> and id: <code>1</code></li> <li>Open Settings, edit <code>http-proxy</code> with the following values</li> </ol> <pre tabindex="0"><code>HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; </code></pr 56-Import Harvester to Rancher in airgapped different subnet https://harvester.github.io/tests/manual/harvester-rancher/56-import-harvester-rancher-online-different-subnet/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/56-import-harvester-rancher-online-different-subnet/ - Environment Setup Note: Harvester and Rancher are under different subnet, can access to each other Setup the online harvester Iso or vagrant ipxe install harvester on network with internet connection Enable vlan on harvester-mgmt Create virtual machine with name vlan1 and id: 1 Create ubuntu cloud image from URL Create virtual machine and assign vlan network, confirm can get ip address Setup the online rancher Install rancher on network with internet connection throug docker command $ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2. 
+ <h3 id="environment-setup">Environment Setup</h3> <p><code>Note: Harvester and Rancher are under different subnet, can access to each other</code></p> <p>Setup the online harvester</p> <ol> <li>Iso or vagrant ipxe install harvester on network with internet connection</li> <li>Enable vlan on <code>harvester-mgmt</code></li> <li>Create virtual machine with name <code>vlan1</code> and id: <code>1</code></li> <li>Create ubuntu cloud image from URL</li> <li>Create virtual machine and assign vlan network, confirm can get ip address</li> </ol> <p>Setup the online rancher</p> 57-Import airgapped harvester from airgapped rancher with Proxy https://harvester.github.io/tests/manual/harvester-rancher/57-import-airgapped-harvester-from-airgapped-rancher-proxy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/57-import-airgapped-harvester-from-airgapped-rancher-proxy/ - Related task: #1052 Test Air gap with Rancher integration Environment Setup Setup the airgapped harvester Fetch ipxe vagrant example with new offline feature https://github.com/harvester/ipxe-examples/pull/32 Edit the setting.xml file Set offline: true Use ipxe vagrant example to setup a 3 nodes cluster Enable vlan on harvester-mgmt Now harvester dashboard page will out of work Create virtual machine with name vlan1 and id: 1 Open Settings, edit http-proxy with the following values HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127. + <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/1052">#1052</a> Test Air gap with Rancher integration</li> </ul> <h3 id="environment-setup">Environment Setup</h3> <p>Setup the airgapped harvester</p> <ol> <li>Fetch ipxe vagrant example with new offline feature <a href="https://github.com/harvester/ipxe-examples/pull/32">https://github.com/harvester/ipxe-examples/pull/32</a></li> <li>Edit the setting.xml file</li> <li>Set offline: <code>true</code></li> <li>Use ipxe vagrant example to setup a 3 nodes cluster</li> <li>Enable vlan on <code>harvester-mgmt</code></li> <li>Now harvester dashboard page will out of work</li> <li>Create virtual machine with name <code>vlan1</code> and id: <code>1</code></li> <li>Open Settings, edit <code>http-proxy</code> with the following values</li> </ol> <pre tabindex="0"><code>HTTP_PROXY=http://proxy-host:port HTTPS_PROXY=http://proxy-host:port NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.0.0/16,cattle-system.svc,.svc,.cluster.local,&lt;internal domain&gt; </code></pr 58-Negative-Fully power cycle harvester node machine should recover RKE2 cluster https://harvester.github.io/tests/manual/harvester-rancher/58-negative-fully-power-cycle-harvester-node-machine-should-recover-rke2-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/58-negative-fully-power-cycle-harvester-node-machine-should-recover-rke2-cluster/ - Related issue: #1561 Fully shutdown then power on harvester node machine can&rsquo;t get provisioned RKE2 cluster back to work Related issue: #1428 rke2-coredns-rke2-coredns-autoscaler timeout Environment Setup The network environment must have vlan network configured and also have DHCP server prepared on your testing vlan Verification Step Prepare a 3 nodes harvester cluster (provo bare machine) Enable virtual network with harvester-mgmt Create vlan1 with id 1 Import harvester from rancher and create cloud credential Provision a RKE2 cluster with vlan 1 Wait for build up ready Shutdown harvester node 
3 Shutdown harvester node 2 Shutdown harvester node 1 Wait for 20 minutes Power on node 1, wait 10 seconds Power on node 2, wait 10 seconds Power on node 3 Wait for harvester startup complete Wait for RKE2 cluster back to work Check node and VIP accessibility Check the rke2-coredns pod status kubectl get pods --all-namespaces | grep rke2-coredns Expected Results RKE2 cluster on harvester can recover to Active status + <ul> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1561">#1561</a> Fully shutdown then power on harvester node machine can&rsquo;t get provisioned RKE2 cluster back to work</p> </li> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1428">#1428</a> rke2-coredns-rke2-coredns-autoscaler timeout</p> </li> </ul> <h2 id="environment-setup">Environment Setup</h2> <ul> <li>The network environment must have vlan network configured and also have DHCP server prepared on your testing vlan</li> </ul> <h2 id="verification-step">Verification Step</h2> <ol> <li>Prepare a 3 nodes harvester cluster (provo bare machine)</li> <li>Enable virtual network with harvester-mgmt</li> <li>Create vlan1 with id <code>1</code></li> <li>Import harvester from rancher and create cloud credential</li> <li>Provision a RKE2 cluster with vlan <code>1</code></li> <li>Wait for build up ready</li> <li>Shutdown harvester node 3</li> <li>Shutdown harvester node 2</li> <li>Shutdown harvester node 1</li> <li>Wait for 20 minutes</li> <li>Power on node 1, wait 10 seconds</li> <li>Power on node 2, wait 10 seconds</li> <li>Power on node 3</li> <li>Wait for harvester startup complete</li> <li>Wait for RKE2 cluster back to work</li> <li>Check node and VIP accessibility</li> <li>Check the rke2-coredns pod status <code>kubectl get pods --all-namespaces | grep rke2-coredns</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li> <p>RKE2 cluster on harvester <code>can recover</code> to <code>Active</code> status</p> 59-Create K3s Kubernetes Cluster https://harvester.github.io/tests/manual/harvester-rancher/59-create-k3s-kubernetes-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/59-create-k3s-kubernetes-cluster/ - Click Cluster Management Click Cloud Credentials Click create and select Harvester Input credential name Select existing cluster in the Imported Cluster list Click Create Click Clusters Click Create Toggle RKE2/K3s Select Harvester Input Cluster Name Select default namespace Select ubuntu image Select network vlan1 Input SSH User: ubuntu Click Show Advanced Add the following user data: password: 123456 chpasswd: { expire: false } ssh_pwauth: true Click the drop down Kubernetes version list + <ol> <li>Click Cluster Management</li> <li>Click Cloud Credentials</li> <li>Click create and select <code>Harvester</code></li> <li>Input credential name</li> <li>Select existing cluster in the <code>Imported Cluster</code> list</li> <li>Click Create</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/4a2f6a52-dac7-4a27-84b3-14cbeb4156aa" alt="image.png"></p> <ol> <li> <p>Click Clusters</p> </li> <li> <p>Click Create</p> </li> <li> <p>Toggle RKE2/K3s</p> </li> <li> <p>Select Harvester</p> </li> <li> <p>Input <code>Cluster Name</code></p> </li> <li> <p>Select <code>default</code> namespace</p> </li> <li> <p>Select ubuntu image</p> </li> <li> <p>Select network <code>vlan1</code></p> 60-Delete K3s Kubernetes Cluster 
https://harvester.github.io/tests/manual/harvester-rancher/60-delete-k3s-kubernetes-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/60-delete-k3s-kubernetes-cluster/ - Open Cluster Management Check provisioned K3s cluster Click Delete from menu Expected Results Can remove K3s Cluster and disappear on Cluster page K3s Cluster will be removed from rancher menu under explore cluster K3s virtual machine should also be removed from Harvester + <ol> <li>Open Cluster Management</li> <li>Check provisioned K3s cluster</li> <li>Click <code>Delete</code> from menu</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can remove K3s Cluster and disappear on Cluster page</li> <li>K3s Cluster will be removed from rancher menu under explore cluster</li> <li>K3s virtual machine should also be removed from Harvester</li> </ol> 61-Deploy Harvester cloud provider to k3s Cluster https://harvester.github.io/tests/manual/harvester-rancher/61-deploy-harvester-cloud-provider-to-k3s-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/61-deploy-harvester-cloud-provider-to-k3s-cluster/ - Related task: #1812 K3s cloud provider and csi driver support Environment Setup Docker install rancher v2.6.4 Create one node harvester with enough resource Verify steps Follow step 1~13 in test plan 59-Create K3s Kubernetes Cluster Click the Edit yaml button Set disable-cloud-provider: true to disable default k3s cloud provider. Add cloud-provider=external to use harvester cloud provider. Create K3s cluster Download the Generate addon configuration for cloud provider Download Harvester kubeconfig and add into your local ~/. + <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/1812">#1812</a> K3s cloud provider and csi driver support</li> </ul> <h3 id="environment-setup">Environment Setup</h3> <ol> <li>Docker install rancher v2.6.4</li> <li>Create one node harvester with enough resource</li> </ol> <h3 id="verify-steps">Verify steps</h3> <p>Follow step <strong>1~13</strong> in test plan <code>59-Create K3s Kubernetes Cluster</code></p> <ol> <li>Click the Edit yaml button <img src="https://user-images.githubusercontent.com/29251855/166190410-47331a84-1d4e-4478-9d85-e68a3da91626.png" alt="image"></li> <li>Set <code>disable-cloud-provider: true</code> to disable default k3s cloud provider. <img src="https://user-images.githubusercontent.com/29251855/158510820-4d8a0021-1675-4c92-86b9-a6427f2e382b.png" alt="image"></li> <li>Add <code>cloud-provider=external</code> to use harvester cloud provider. 
<img src="https://user-images.githubusercontent.com/29251855/158511002-47a4a532-7f67-4eb0-8da4-074c6d9752e9.png" alt="image"></li> <li>Create K3s cluster <img src="https://user-images.githubusercontent.com/29251855/158511706-1c0c6af5-8909-4b1d-bc2a-0fa2fa26e000.png" alt="image"></li> <li>Download the <a href="https://github.com/harvester/cloud-provider-harvester/blob/master/deploy/generate_addon.sh">Generate addon configuration</a> for cloud provider</li> <li>Download Harvester kubeconfig and add into your local ~/.kube/config file</li> <li>Generate K3s kubeconfig by running generate addon script <code> ./deploy/generate_addon.sh &lt;k3s cluster name&gt; &lt;namespace&gt;</code> e.g <code>./generate_addon.sh k3s-focal-cloud-provider default</code></li> <li>Copy the kubeconfig content</li> <li>ssh to K3s VM <img src="https://user-images.githubusercontent.com/29251855/158534901-8fd22159-6a04-4592-ba25-ba4d73742a20.png" alt="image"></li> <li>Add kubeconfig content to <code>/etc/kubernetes/cloud-config</code> file, remember to align the yaml layout</li> <li>Install Harvester cloud provider <img src="https://user-images.githubusercontent.com/29251855/158512528-42ff575a-87a6-4424-bfb5-fa7af94ea74d.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/158512667-18b0249c-f859-4ae4-96b7-42ce873cb97a.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can install the Harvester cloud provider on k3s cluster correctly <img src="https://user-images.githubusercontent.com/29251855/158512758-d06df2f6-7094-4d41-b960-d50b26cd23fb.png" alt="image"></li> </ol> 62-Configure the K3s "DHCP" LoadBalancer service https://harvester.github.io/tests/manual/harvester-rancher/62-configure-k3s-dhcp-loadbalancer/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/62-configure-k3s-dhcp-loadbalancer/ - Prerequisite: Already provision K3s cluster and cloud provider on test plan 59-Create K3s Kubernetes Cluster 61-Deploy Harvester cloud provider to k3s Cluster Create Nginx workload for testing Create a test-nginx deployment with image nginx:latest. Add pod label test: test. Create a DHCP LoadBalancer Open Kubectl shell. Create test-dhcp-lb.yaml file. apiVersion: v1 kind: Service metadata: annotations: cloudprovider.harvesterhci.io/ipam: dhcp name: test-dhcp-lb namespace: default spec: ports: - name: http nodePort: 30172 port: 8080 protocol: TCP targetPort: 80 selector: test: test sessionAffinity: None type: LoadBalancer Run k apply -f test-dhcp-lb. + <p>Prerequisite: Already provision K3s cluster and cloud provider on test plan</p> <ul> <li>59-Create K3s Kubernetes Cluster</li> <li>61-Deploy Harvester cloud provider to k3s Cluster</li> </ul> <h4 id="create-nginx-workload-for-testing">Create Nginx workload for testing</h4> <ol> <li>Create a test-nginx deployment with image nginx:latest. <img src="https://user-images.githubusercontent.com/29251855/158512919-a35a079a-aa75-4ce8-bac6-a79438a2e112.png" alt="image"></li> <li>Add pod label test: test. <img src="https://user-images.githubusercontent.com/29251855/158513017-5afc909a-662a-4f4e-b867-2555241a2cbd.png" alt="image"> <img src="https://user-images.githubusercontent.com/29251855/158513105-09ab472b-7cd4-4352-b4e1-84f673ee7088.png" alt="image"></li> </ol> <h4 id="create-a-dhcp-loadbalancer">Create a DHCP LoadBalancer</h4> <ol> <li>Open Kubectl shell.</li> <li>Create <code>test-dhcp-lb.yaml</code> file. 
<pre tabindex="0"><code>apiVersion: v1 kind: Service metadata: annotations: cloudprovider.harvesterhci.io/ipam: dhcp name: test-dhcp-lb namespace: default spec: ports: - name: http nodePort: 30172 port: 8080 protocol: TCP targetPort: 80 selector: test: test sessionAffinity: None type: LoadBalancer </code></pr 62-Configure the K3s "DHCP" LoadBalancer service https://harvester.github.io/tests/manual/harvester-rancher/64-configure-k3s-dhcp-lb-healcheck/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/64-configure-k3s-dhcp-lb-healcheck/ - Prerequisite: Already provision K3s cluster and cloud provider on test plan 59-Create K3s Kubernetes Cluster 61-Deploy Harvester cloud provider to k3s Cluster 62-Configure the K3s &ldquo;DHCP&rdquo; LoadBalancer service A Working DHCP load balancer service created on K3s cluster Edit Load balancer config Check the &ldquo;Add-on Config&rdquo; tabs Configure port, IPAM and health check related setting on Add-on Config page Expected Results Can create load balance service correctly Can route workload to nginx deployment + <p>Prerequisite: Already provision K3s cluster and cloud provider on test plan</p> <ul> <li>59-Create K3s Kubernetes Cluster</li> <li>61-Deploy Harvester cloud provider to k3s Cluster</li> <li>62-Configure the K3s &ldquo;DHCP&rdquo; LoadBalancer service</li> </ul> <ol> <li>A <code>Working</code> DHCP load balancer service created on K3s cluster</li> <li>Edit Load balancer config</li> <li>Check the &ldquo;Add-on Config&rdquo; tabs</li> <li>Configure <code>port</code>, <code>IPAM</code> and <code>health check</code> related setting on <code>Add-on Config</code> page <img src="https://user-images.githubusercontent.com/29251855/141245366-799057f1-2aa7-4d7a-90d2-5e11541ddbc3.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create load balance service correctly</li> <li>Can route workload to nginx deployment</li> </ol> 63-Configure the K3s "Pool" LoadBalancer service https://harvester.github.io/tests/manual/harvester-rancher/63-configure-k3s-pool-loadbalancer/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/63-configure-k3s-pool-loadbalancer/ - Prerequisite: Already provision K3s cluster and cloud provider on test plan 59-Create K3s Kubernetes Cluster 61-Deploy Harvester cloud provider to k3s Cluster Create Nginx workload for testing Create a test-nginx deployment with image nginx:latest. Add pod label test: test. Create a Pool LoadBalancer Modify vip-pool in Harvester settings. Open Kubectl shell. Create test-pool-lb.yaml file. apiVersion: v1 kind: Service metadata: annotations: cloudprovider.harvesterhci.io/ipam: pool name: test-pool-lb namespace: default spec: ports: - name: http nodePort: 32155 port: 8080 protocol: TCP targetPort: 80 selector: test: test sessionAffinity: None type: LoadBalancer Run k apply -f test-pool-lb. + <p>Prerequisite: Already provision K3s cluster and cloud provider on test plan</p> <ul> <li>59-Create K3s Kubernetes Cluster</li> <li>61-Deploy Harvester cloud provider to k3s Cluster</li> </ul> <h4 id="create-nginx-workload-for-testing">Create Nginx workload for testing</h4> <ol> <li>Create a test-nginx deployment with image nginx:latest. <img src="https://user-images.githubusercontent.com/29251855/158512919-a35a079a-aa75-4ce8-bac6-a79438a2e112.png" alt="image"></li> <li>Add pod label test: test. 
<img src="https://user-images.githubusercontent.com/29251855/158513017-5afc909a-662a-4f4e-b867-2555241a2cbd.png" alt="image"></li> </ol> <h4 id="create-a-pool-loadbalancer">Create a Pool LoadBalancer</h4> <ol> <li> <p>Modify vip-pool in Harvester settings. <img src="https://user-images.githubusercontent.com/29251855/158514040-bfcd9ff3-964a-4511-94d7-a497ef88848f.png" alt="image"></p> </li> <li> <p>Open Kubectl shell.</p> 65-Configure the K3s "Pool" LoadBalancer health check https://harvester.github.io/tests/manual/harvester-rancher/65-configure-k3s-pool-lb-healthcheck/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/65-configure-k3s-pool-lb-healthcheck/ - Prerequisite: Already provision K3s cluster and cloud provider on test plan 59-Create K3s Kubernetes Cluster 61-Deploy Harvester cloud provider to k3s Cluster 63-Configure the K3s &ldquo;Pool&rdquo; LoadBalancer service A Working DHCP load balancer service created on K3s cluster Edit Load balancer config Check the &ldquo;Add-on Config&rdquo; tabs Configure port, IPAM and health check related setting on Add-on Config page Expected Results Can create load balance service correctly Can route workload to nginx deployment + <p>Prerequisite: Already provision K3s cluster and cloud provider on test plan</p> <ul> <li>59-Create K3s Kubernetes Cluster</li> <li>61-Deploy Harvester cloud provider to k3s Cluster</li> <li>63-Configure the K3s &ldquo;Pool&rdquo; LoadBalancer service</li> </ul> <ol> <li>A <code>Working</code> DHCP load balancer service created on K3s cluster</li> <li>Edit Load balancer config</li> <li>Check the &ldquo;Add-on Config&rdquo; tabs</li> <li>Configure <code>port</code>, <code>IPAM</code> and <code>health check</code> related setting on <code>Add-on Config</code> page <img src="https://user-images.githubusercontent.com/29251855/141245366-799057f1-2aa7-4d7a-90d2-5e11541ddbc3.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can create load balance service correctly</li> <li>Can route workload to nginx deployment</li> </ol> 66-Deploy Harvester csi driver to k3s Cluster https://harvester.github.io/tests/manual/harvester-rancher/66-deploy-harvester-csi-driver-to-k3s-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/66-deploy-harvester-csi-driver-to-k3s-cluster/ - Related task: #2755 Steps to manually install Harvester csi-driver on K3s cluster Reference Document Deploying with Harvester K3s Node Driver Verify steps Prepare a Harvester cluster with enough cpu, memory and disks for K3s guest cluster Create a Rancher instance Import Harvester in Rancher and create cloud credential ssh to Harvester management node Extract the kubeconfig of Harvester with cat /etc/rancher/rke2/rke2.yaml Change the server value from https://127.0.0.1:6443/ to your VIP + <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/2755#issuecomment-1552842389">#2755</a> Steps to manually install Harvester csi-driver on K3s cluster</li> </ul> <h3 id="reference-document">Reference Document</h3> <p><a href="https://deploy-preview-309--harvester-preview.netlify.app/dev/rancher/csi-driver/#deploying-with-harvester-k3s-node-driver">Deploying with Harvester K3s Node Driver</a></p> <h3 id="verify-steps">Verify steps</h3> <ol> <li> <p>Prepare a Harvester cluster with enough cpu, memory and disks for K3s guest cluster</p> </li> <li> <p>Create a Rancher instance</p> </li> <li> <p>Import Harvester in Rancher and create 
cloud credential</p> </li> <li> <p>ssh to Harvester management node</p> 67-Harvester persistent volume on k3s Cluster https://harvester.github.io/tests/manual/harvester-rancher/67-harvester-persistent-volume-on-k3s-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/67-harvester-persistent-volume-on-k3s-cluster/ - Related task: #2755 Steps to manually install Harvester csi-driver on K3s cluster Verify steps Follow test case 66-Deploy Harvester csi driver to k3s Cluster to manually install csi-driver on k3s cluster Create a nginx deployment in Workload -&gt; Deployments Create a Persistent Volume Claims, select storage class to harvester Select the Single-Node Read/Write Open Harvester Volumes page, check the corresponding volume exists Click Execute shell to access Nginx container. + <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/2755#issuecomment-1552842389">#2755</a> Steps to manually install Harvester csi-driver on K3s cluster</li> </ul> <h3 id="verify-steps">Verify steps</h3> <p>Follow test case <code>66-Deploy Harvester csi driver to k3s Cluster</code> to manually install csi-driver on k3s cluster</p> <ol> <li> <p>Create a nginx deployment in Workload -&gt; Deployments</p> </li> <li> <p>Create a Persistent Volume Claims, select storage class to <code>harvester</code></p> </li> <li> <p>Select the <code>Single-Node Read/Write</code></p> </li> <li> <p>Open Harvester Volumes page, check the corresponding volume exists <img src="https://github.com/harvester/harvester/assets/29251855/8330c45f-ade1-4819-b2f0-5206e32123b6" alt="image"></p> 68-Fully airgapped rancher integrate with harvester with no proxy https://harvester.github.io/tests/manual/harvester-rancher/68-fully-airgapped-rancher-integrate-harvester-no-proxy/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/68-fully-airgapped-rancher-integrate-harvester-no-proxy/ - Related task: #1808 RKE2 provisioning fails when Rancher has no internet access (air-gapped) Note1: in fully air gapped environment, you have to setup private docker hub registry and pull all rancher related offline image Note2: Please use SUSE SLES JeOS image, it have qemu-guest-agent already installed, thus the guest VM can get IP correctly Environment Setup Setup the airgapped harvester Fetch ipxe vagrant example with new offline feature https://github.com/harvester/ipxe-examples/pull/32 Edit the setting. 
+ <ul> <li> <p>Related task: <a href="https://github.com/harvester/harvester/issues/1808">#1808</a> RKE2 provisioning fails when Rancher has no internet access (air-gapped)</p> </li> <li> <p><strong>Note1</strong>: in fully air gapped environment, you have to setup private docker hub registry and pull all rancher related offline image</p> </li> <li> <p><strong>Note2</strong>: Please use SUSE SLES JeOS image, it have <code>qemu-guest-agent</code> already installed, thus the guest VM can get IP correctly</p> </li> </ul> <h3 id="environment-setup">Environment Setup</h3> <p>Setup the airgapped harvester</p> 69-DHCP Harvester LoadBalancer service no health check https://harvester.github.io/tests/manual/harvester-rancher/69-dhcp-loadbalancer-service-no-health-check/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/69-dhcp-loadbalancer-service-no-health-check/ - Prerequisite: Already provision RKE1/RKE2 cluster in previous test case Open Global Settings in hamburger menu Replace ui-dashboard-index to https://releases.rancher.com/harvester-ui/dashboard/latest/index.html Change ui-offline-preferred to Remote Refresh the current page (ctrl + r) Open provisioned RKE2 cluster from hamburger menu Drop down Service Discovery Click Services Click Create Select Load Balancer Given service name to make the load balancer name composed of the cluster name, namespace, svc name, and suffix(8 characters) more than 63 characters Provide Listening port and Target port Click Add-on Config + <p>Prerequisite: Already provision RKE1/RKE2 cluster in previous test case</p> <ol> <li>Open <code>Global Settings</code> in hamburger menu</li> <li>Replace <code>ui-dashboard-index</code> to <code>https://releases.rancher.com/harvester-ui/dashboard/latest/index.html</code></li> <li>Change <code>ui-offline-preferred</code> to <code>Remote</code></li> <li>Refresh the current page (ctrl + r)</li> <li>Open provisioned RKE2 cluster from hamburger menu</li> <li>Drop down <code>Service Discovery</code></li> <li>Click <code>Services</code></li> <li>Click Create</li> <li>Select <code>Load Balancer</code></li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/f628094c-a195-4f99-9fb7-858d759dc019" alt="image.png"></p> <ol> <li>Given service name to make the load balancer name composed of the cluster name, namespace, svc name, and suffix(8 characters) more than 63 characters</li> <li>Provide Listening port and Target port</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/2c20c759-4769-438b-94ad-5b995ba66873" alt="image.png"></p> 70-Pool LoadBalancer service no health check https://harvester.github.io/tests/manual/harvester-rancher/70-pool-loadbalancer-service-no-health-check/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/70-pool-loadbalancer-service-no-health-check/ - Prerequisite: Already provision RKE1/RKE2 cluster in previous test case Open Global Settings in hamburger menu Replace ui-dashboard-index to https://releases.rancher.com/harvester-ui/dashboard/latest/index.html Change ui-offline-preferred to Remote Refresh the current page (ctrl + r) Access Harvester dashboard UI Go to Settings Create a vip-pool in Harvester settings. 
Open provisioned RKE2 cluster from hamburger menu Drop down Service Discovery Click Services Click Create Select Load Balancer Given service name Provide Listening port and Target port Click Add-on Config + <p>Prerequisite: Already provision RKE1/RKE2 cluster in previous test case</p> <ol> <li> <p>Open <code>Global Settings</code> in hamburger menu</p> </li> <li> <p>Replace <code>ui-dashboard-index</code> to <code>https://releases.rancher.com/harvester-ui/dashboard/latest/index.html</code></p> </li> <li> <p>Change <code>ui-offline-preferred</code> to <code>Remote</code></p> </li> <li> <p>Refresh the current page (ctrl + r)</p> </li> <li> <p>Access Harvester dashboard UI</p> </li> <li> <p>Go to Settings</p> </li> <li> <p>Create a vip-pool in Harvester settings. <img src="https://user-images.githubusercontent.com/29251855/158514040-bfcd9ff3-964a-4511-94d7-a497ef88848f.png" alt="image"></p> </li> <li> <p>Open provisioned RKE2 cluster from hamburger menu</p> 71-Manually Deploy Harvester csi driver to RKE2 Cluster https://harvester.github.io/tests/manual/harvester-rancher/71-manually-deploy-csi-driver-to-rke2-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/71-manually-deploy-csi-driver-to-rke2-cluster/ - Related task: #2755 Steps to manually install Harvester csi-driver on RKE2 cluster Reference Document Deploying with Harvester RKE2 Node Driver Verify steps ssh to Harvester management node Extract the kubeconfig of Harvester with cat /etc/rancher/rke2/rke2.yaml Change the server value from https://127.0.0.1:6443/ to your VIP Copy the kubeconfig and add into your local ~/.kube/config file Import Harvester in Rancher Create cloud credential Provision a RKE2 cluster Provide the login credential in user data + <ul> <li>Related task: <a href="https://github.com/harvester/harvester/issues/2755#issuecomment-1552839577">#2755</a> Steps to manually install Harvester csi-driver on RKE2 cluster</li> </ul> <h3 id="reference-document">Reference Document</h3> <p><a href="https://deploy-preview-309--harvester-preview.netlify.app/dev/rancher/csi-driver/#deploying-with-harvester-rke2-node-driver">Deploying with Harvester RKE2 Node Driver</a></p> <h3 id="verify-steps">Verify steps</h3> <ol> <li> <p>ssh to Harvester management node</p> </li> <li> <p>Extract the kubeconfig of Harvester with <code>cat /etc/rancher/rke2/rke2.yaml</code></p> </li> <li> <p>Change the server value from https://127.0.0.1:6443/ to your VIP</p> </li> <li> <p>Copy the kubeconfig and add into your local ~/.kube/config file</p> 72-Use ipxe example to test fully airgapped rancher integration https://harvester.github.io/tests/manual/harvester-rancher/72-ipxe-auto-airgapped-rancher-integrate-harvester-/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/72-ipxe-auto-airgapped-rancher-integrate-harvester-/ - Related task: #1808 RKE2 provisioning fails when Rancher has no internet access (air-gapped) Note1: In this test, we use vagrant-pxe-airgap-harvester to automatically provide the fully airgapped environment Note1: Compared to test case 68, we don&rsquo;t need to manually create a separate VM for the Rancher instance and docker private registry, all the prerequisite environment can be done with the vagrant-pxe-airgap-harvester solution Environment Setup Phase 1: Create airgapped Harvester cluster, Rancher and private registry Clone the latest ipxe-example which include the vagrant-pxe-airgap-harvester Follow the Sample Host Loadout and Prerequisites 
in readme to prepare the prerequisite package If you use Opensuse Leap operating system, you may need to comment out the following line in Vagrantfile file # libvirt. + <ul> <li> <p>Related task: <a href="https://github.com/harvester/harvester/issues/1808">#1808</a> RKE2 provisioning fails when Rancher has no internet access (air-gapped)</p> </li> <li> <p><strong>Note1</strong>: In this test, we use <a href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-airgap-harvester">vagrant-pxe-airgap-harvester</a> to automatically provide the fully airgapped environment</p> </li> <li> <p><strong>Note1</strong>: Compared to test case 68, we don&rsquo;t need to manually create a separate VM for the Rancher instance and docker private registry, all the prerequisite environment can be done with the <code>vagrant-pxe-airgap-harvester</code> solution</p> Check can apply the resource quota limit to project and namespace https://harvester.github.io/tests/manual/harvester-rancher/check-can-apply-the-resource-quota-limit-to-project-and-namespace-/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/check-can-apply-the-resource-quota-limit-to-project-and-namespace-/ - Related issues: #1454 Incorrect memory unit conversion in namespace resource quota Category: Rancher Integration Environment setup Install the latest rancher from docker command $ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.3 Verification Steps Access Rancher dashboard Open Cluster management -&gt; Explore the active cluster Create a new project test-1454-proj in Projects/Namespaces Set resource quota for the project Memory Limit: Project Limit: 512 Namespace default limit: 256 Memory Reservation: Project Limit: 256 Namespace default limit: 128 Click create namespace test-1454-ns under project test-1454-proj Click Kubectl Shell and run the following command kubectl get ns kubectl get quota -n test-1454-ns Check the output Click Workload -&gt; Deployments -&gt; Create Given the Name, Namespace and Container image Click Create Expected Results Based on configured project resource limit and namespace default limit, + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1454">#1454</a> Incorrect memory unit conversion in namespace resource quota</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher Integration</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ol> <li>Install the latest rancher from docker command</li> </ol> <pre tabindex="0"><code>$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.3 </code></pre><h2 id="verification-steps">Verification Steps</h2> <ol> <li>Access Rancher dashboard</li> <li>Open Cluster management -&gt; Explore the active cluster</li> <li>Create a new project <code>test-1454-proj</code> in Projects/Namespaces</li> <li>Set resource quota for the project</li> </ol> <ul> <li>Memory Limit: <ul> <li>Project Limit: 512</li> <li>Namespace default limit: 256</li> </ul> </li> <li>Memory Reservation: <ul> <li>Project Limit: 256</li> <li>Namespace default limit: 128</li> </ul> </li> </ul> <ol> <li>Click create namespace <code>test-1454-ns</code> under project <code>test-1454-proj</code></li> <li>Click <code>Kubectl Shell</code> and run the following command</li> </ol> <ul> <li>kubectl get ns</li> <li>kubectl get quota -n test-1454-ns</li> </ul> <ol> <li>Check the output</li> <li>Click <code>Workload</code> -&gt; 
<code>Deployments</code> -&gt; <code>Create</code></li> <li>Given the <code>Name</code>, <code>Namespace</code> and <code>Container image</code> <img src="https://user-images.githubusercontent.com/29251855/143847775-eb84fa49-54d5-4001-a210-cbd8ed1235d1.png" alt="image"></li> <li>Click Create</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Based on configured project resource limit and namespace default limit,</p> Check default and customized project and namespace details page https://harvester.github.io/tests/manual/harvester-rancher/check-default-customized-project-and-namespace-details-page/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/check-default-customized-project-and-namespace-details-page/ - Related issue: #1574 Multi-cluster projectNamespace details page error Category: Rancher Integration Environment setup Install rancher 2.6.3 by docker docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.3 Verification Steps Import harvester from rancher dashboard Access harvester from virtualization management page Create several new projects Create several new namespaces under each new projects Access all default and self created namespace Check can display namespace details Check all new namespaces can display correctly under each projects Expected Results Access harvester from rancher virtualization management page Click any namespace in the Projects/Namespace can display details correctly with no page error Default namespace Customized namespace Newly created namespace will display under project list + <ul> <li>Related issue: <a href="https://github.com/harvester/harvester/issues/1574">#1574</a> Multi-cluster projectNamespace details page error</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher Integration</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ol> <li>Install rancher <code>2.6.3</code> by docker</li> </ol> <pre tabindex="0"><code>docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.3 </code></pre><h2 id="verification-steps">Verification Steps</h2> <ol> <li>Import harvester from rancher dashboard</li> <li>Access harvester from virtualization management page</li> <li>Create several new projects</li> <li>Create several new namespaces under each new projects</li> <li>Access all default and self created namespace</li> <li>Check can display namespace details</li> <li>Check all new namespaces can display correctly under each projects</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Access harvester from rancher virtualization management page Click any namespace in the Projects/Namespace can display details correctly with no page error</li> </ol> <p>Default namespace <img src="https://user-images.githubusercontent.com/29251855/143835124-6f81b902-e0b1-4cbd-8e1f-e818ee033fdb.png" alt="image"></p> Create a VM through the Rancher dashboard https://harvester.github.io/tests/manual/harvester-rancher/1613-create-vm-through-rancher-dashboard/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/1613-create-vm-through-rancher-dashboard/ - Related issues: #1613 VM memory shows NaN Gi Verification Steps import harvester into rancher&rsquo;s virtualization management Load Harvester dashboard by going to virtualization management then clicking on harvester cluster Create a new VM on Harvester Validate the following in the VM list page, the form, and YAML&gt; Memory CPU Disk space 
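As a complementary CLI cross-check for the memory value (the symptom in #1613 was a "NaN Gi" reading in the UI), the same spec can be read back from the cluster; the VM name and namespace below are placeholders for the VM created in the steps above.

```
# Hypothetical name/namespace -- substitute the VM created above
kubectl get vm <vm-name> -n default \
  -o jsonpath='{.spec.template.spec.domain.resources}{"\n"}'
# Expect concrete quantities (e.g. a memory value such as 2Gi); the UI should render
# the same figures, never an empty or NaN-like value
```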
Expected Results VM should create VM should start All specifications should show correctly + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1613">#1613</a> VM memory shows NaN Gi</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>import harvester into rancher&rsquo;s virtualization management</li> <li>Load Harvester dashboard by going to virtualization management then clicking on harvester cluster</li> <li>Create a new VM on Harvester</li> <li>Validate the following in the VM list page, the form, and YAML&gt; <ol> <li>Memory</li> <li>CPU</li> <li>Disk space</li> </ol> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create</li> <li>VM should start</li> <li>All specifications should show correctly</li> </ol> Create RKE2 cluster with no cloud provider https://harvester.github.io/tests/manual/harvester-rancher/1577-create-rke2-cluster-no-cloud-provider/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/1577-create-rke2-cluster-no-cloud-provider/ - Related issues: #1577 Option to disable load balancer feature in cloud provider Verification Steps Click Cluster Management Click Cloud Credentials Click createa and select Harvester Input credential name Select existing cluster in the Imprted Cluster list Click Create Click Clusters Click Create Toggle RKE2/K3s Select Harvester Input Cluster Name Select default namespace Select ubuntu image Select network vlan1 Input SSH User: ubuntu Select None for cloud provider Click Create Wait for RKE2 cluster provisioning complete (~20min) Expected Results Provision RKE2 cluster successfully with Running status Can acccess RKE2 cluster to check all resources and services by clicking manage + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1577">#1577</a> Option to disable load balancer feature in cloud provider</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Click Cluster Management</li> <li>Click Cloud Credentials</li> <li>Click createa and select <code>Harvester</code></li> <li>Input credential name</li> <li>Select existing cluster in the <code>Imprted Cluster</code> list</li> <li>Click Create</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/4a2f6a52-dac7-4a27-84b3-14cbeb4156aa" alt="image.png"></p> <ol> <li>Click Clusters</li> <li>Click Create</li> <li>Toggle RKE2/K3s</li> <li>Select Harvester</li> <li>Input <code>Cluster Name</code></li> <li>Select <code>default</code> namespace</li> <li>Select ubuntu image</li> <li>Select network <code>vlan1</code></li> <li>Input SSH User: <code>ubuntu</code></li> <li>Select <code>None</code> for cloud provider <img src="https://user-images.githubusercontent.com/4569037/142971322-f34a9c6d-095e-4dcc-9981-103bee4453ff.png" alt="image"></li> <li>Click Create</li> </ol> <p><img src="https://images.zenhubusercontent.com/61519853321ea20d65443929/cbd9cc9b-60fb-4e81-985a-13fcaa88fa2f" alt="image.png"></p> Delete 3 node RKE2 cluster https://harvester.github.io/tests/manual/harvester-rancher/1311-delete-3-node-rke2-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/1311-delete-3-node-rke2-cluster/ - Related issues: #1311 Deleting a cluster in rancher dashboard doesn&rsquo;t fully remove Verification Steps Create 3 node RKE2 cluster on Harvester through node driver with Rancher Wait fo the nodes to create, but not fully provision Delete the cluster Wait for them to be 
removed from Harvester Check Rancher cluster management Expected Results Cluster should be removed from Rancher VMs should be removed from Harvester + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1311">#1311</a> Deleting a cluster in rancher dashboard doesn&rsquo;t fully remove</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create 3 node RKE2 cluster on Harvester through node driver with Rancher</li> <li>Wait fo the nodes to create, but not fully provision</li> <li>Delete the cluster</li> <li>Wait for them to be removed from Harvester</li> <li>Check Rancher cluster management</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Cluster should be removed from Rancher</li> <li>VMs should be removed from Harvester</li> </ol> Provision RKE2 cluster with resource quota configured https://harvester.github.io/tests/manual/harvester-rancher/provision-rke2-cluster-with-resource-quota-configured/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/provision-rke2-cluster-with-resource-quota-configured/ - Related issues: #1455 Node driver provisioning fails when resource quota configured in project Related issues: #1449 Incorrect naming of project resource configuration Category: Rancher Integration Environment setup Install the latest rancher from docker command $ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.3 Test Scenarios Scenario 1: Project with resource quota: CPU Limit / CPU Reservation: 6000 / 6144 Memory Limit / Memory Reservation: 6000 / 6144 Scenario 2: + <ul> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/1455">#1455</a> Node driver provisioning fails when resource quota configured in project</p> </li> <li> <p>Related issues: <a href="https://github.com/harvester/harvester/issues/1449">#1449</a> Incorrect naming of project resource configuration</p> </li> </ul> <h2 id="category">Category:</h2> <ul> <li>Rancher Integration</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ol> <li>Install the latest rancher from docker command</li> </ol> <pre tabindex="0"><code>$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.3 </code></pre><h2 id="test-scenarios">Test Scenarios</h2> <ul> <li> <p>Scenario 1:</p> Rancher Resource quota management https://harvester.github.io/tests/manual/harvester-rancher/resource_quota/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/resource_quota/ - Ref: https://github.com/harvester/harvester/issues/1450 Verify Items Project&rsquo;s Resource quotas can be updated correctly Namespace Default Limit should be assigned as the Project configured Namespace moving between projects should work correctly Case: Create Namespace with Resource quotas Install Harvester with any nodes Install Rancher Login to Rancher, import Harvester from Virtualization Management Access Harvester dashboard via Virtualization Management Navigate to Project/Namespaces, Create Project A with Resource quotas Create Namespace N1 based on Project A The Default value of Resource Quotas should be the same as Namespace Default Limit assigned in Project A Modifying resource limit should work correctly (when increasing/decreasing, the value should increased/decreased) After N1 Created, Click Edit Config on N1 resource limit should be the same as we assigned Increase/decrease resource limit then Save Click Edit 
Config on N1, resource limit should be the same as we assigned Click Edit Config on N1, then increase resource limit exceeds Project A&rsquo;s Limit Click Save Button, error message should shown. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1450">https://github.com/harvester/harvester/issues/1450</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Project&rsquo;s Resource quotas can be updated correctly</li> <li><strong>Namespace Default Limit</strong> should be assigned as the Project configured</li> <li>Namespace moving between projects should work correctly</li> </ul> <h2 id="case-create-namespace-with-resource-quotas">Case: Create Namespace with Resource quotas</h2> <ol> <li>Install Harvester with any nodes</li> <li>Install Rancher</li> <li>Login to Rancher, import Harvester from <em>Virtualization Management</em></li> <li>Access Harvester dashboard via <em>Virtualization Management</em></li> <li>Navigate to <em>Project/Namespaces</em>, Create Project <code>A</code> with Resource quotas</li> <li>Create Namespace <code>N1</code> based on Project <code>A</code></li> <li>The Default value of Resource Quotas should be the same as <strong>Namespace Default Limit</strong> assigned in Project <code>A</code></li> <li>Modifying <strong>resource limit</strong> should work correctly (when increasing/decreasing, the value should increased/decreased)</li> <li>After <code>N1</code> Created, Click <strong>Edit Config</strong> on <code>N1</code></li> <li><strong>resource limit</strong> should be the same as we assigned</li> <li>Increase/decrease <strong>resource limit</strong> then Save</li> <li>Click <strong>Edit Config</strong> on <code>N1</code>, <strong>resource limit</strong> should be the same as we assigned</li> <li>Click <strong>Edit Config</strong> on <code>N1</code>, then increase <strong>resource limit</strong> exceeds Project <code>A</code>&rsquo;s Limit</li> <li>Click <strong>Save</strong> Button, error message should shown.</li> <li>Click <strong>Edit Config</strong> on <code>N1</code>, then change the <strong>Project</strong> to <code>Default</code></li> <li>The Namespace <code>N1</code> should be moved to Project <code>Default</code></li> </ol> Reboot a cluster and check VIP https://harvester.github.io/tests/manual/harvester-rancher/1669-reboot-cluster-check-vip/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher/1669-reboot-cluster-check-vip/ - Related issues: #1669 Unable to access harvester VIP nor node IP after reboot or fully power cycle node machines (Intermittent) Verification Steps Enable VLAN with NIC harvester-mgmt Create VLAN 1 Disable VLAN Enable VLAN again shutdown node 3, 2, 1 server machine Wait for 15 minutes Power on node 1 server machine, wait for 20 seconds Power on node 2 server machine, wait for 20 seconds Power on node 3 server machine Check if you can access VIP and each node IP Expected Results VIP should load the page and show on every node in the terminal + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1669">#1669</a> Unable to access harvester VIP nor node IP after reboot or fully power cycle node machines (Intermittent)</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Enable VLAN with NIC harvester-mgmt</li> <li>Create VLAN 1</li> <li>Disable VLAN</li> <li>Enable VLAN again</li> <li>shutdown node 3, 2, 1 server machine</li> <li>Wait for 15 minutes</li> <li>Power on node 1 server machine, wait for 20 seconds</li> <li>Power on node 2 server 
machine, wait for 20 seconds</li> <li>Power on node 3 server machine</li> <li>Check if you can access VIP and each node IP</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VIP should load the page and show on every node in the terminal</li> </ol> diff --git a/manual/harvester-rancher2-terraform-integration/index.xml b/manual/harvester-rancher2-terraform-integration/index.xml index e56ab2d97..5f2d668fc 100644 --- a/manual/harvester-rancher2-terraform-integration/index.xml +++ b/manual/harvester-rancher2-terraform-integration/index.xml @@ -12,7 +12,7 @@ https://harvester.github.io/tests/manual/harvester-rancher2-terraform-integration/terraform_rancher2_provider_testing/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/harvester-rancher2-terraform-integration/terraform_rancher2_provider_testing/ - Ref: https://github.com/rancher/terraform-provider-rancher2/issues/1009 Test Information Environment Rancher: v2.7.X Environment for Harvester: bare-metal or qemu Harvester Version: v1.1.X ui-source Option: Auto Rancher2 Terraform Provider Plugin: v3.0.X rancher2 Test Setup Rancher2 Terraform Provider: make sure terraform is installed at version equal or greater than 1.3.9, ie: sudo apt install terraform utilize the setup-provider.sh script from the rancher2 terraform provider repo if testing an rc it would look something like ./setup-provider.sh rancher2 v3.0.0-rc1 ensure the provider is installed, can cross check the directory structures under ~/. + <p>Ref: <a href="https://github.com/rancher/terraform-provider-rancher2/issues/1009">https://github.com/rancher/terraform-provider-rancher2/issues/1009</a></p> <h2 id="test-information">Test Information</h2> <ul> <li>Environment Rancher: v2.7.X</li> <li>Environment for Harvester: bare-metal or qemu</li> <li>Harvester Version: v1.1.X</li> <li><strong>ui-source</strong> Option: <strong>Auto</strong></li> <li>Rancher2 Terraform Provider Plugin: v3.0.X <a href="https://github.com/rancher/terraform-provider-rancher2/releases">rancher2</a></li> </ul> <h3 id="test-setup-rancher2-terraform-provider">Test Setup Rancher2 Terraform Provider:</h3> <ol> <li>make sure terraform is installed at version equal or greater than 1.3.9, ie: <code>sudo apt install terraform</code></li> <li>utilize the <a href="https://github.com/rancher/terraform-provider-rancher2/blob/master/setup-provider.sh">setup-provider.sh</a> script from the rancher2 terraform provider repo if testing an rc it would look something like <code>./setup-provider.sh rancher2 v3.0.0-rc1</code></li> <li>ensure the provider is installed, can cross check the directory structures under <code>~/.terraform.d/plugins/terraform.local</code></li> </ol> <h3 id="setup-rancher-v27x">Setup Rancher v2.7.X</h3> <ol> <li>build an API Key for Rancher utilizing <a href="https://ranchermanager.docs.rancher.com/reference-guides/user-settings/api-keys">this doc</a>, keeping reference of the: access-key, secret-key, &amp; bearer token</li> <li>import a harvester cluster into Rancher v2.7.X, keep reference of that Harvester cluster name</li> </ol> <h2 id="additional-setup">Additional Setup</h2> <ol> <li>build out a temporary directory to preform this deep integration testing</li> <li>create the following two folders of something like: <ul> <li><code>harvester-setup</code></li> <li><code>rancher-setup</code></li> </ul> </li> <li>inside each folder create a: <ul> <li><code>main.tf</code></li> <li><code>provider.tf</code></li> </ul> </li> </ol> <h3 id="harvester-setup">Harvester Setup</h3> 
<ol> <li>download the Harvester kubeconfig file into the <code>harvester-setup</code> folder</li> <li>inside the <code>harvester-setup</code> folder in the <code>provider.tf</code> file add:</li> </ol> <pre tabindex="0"><code>terraform { required_version = &#34;&gt;= 0.13&#34; required_providers { harvester = { source = &#34;harvester/harvester&#34; version = &#34;0.6.1&#34; } } } provider &#34;harvester&#34; { kubeconfig = &#34;&lt;the kubeconfig file path of the harvester cluster&gt;&#34; } </code></pr diff --git a/manual/hosts/disk-devices-used-for-vm-storage-globally-configurable/index.html b/manual/hosts/disk-devices-used-for-vm-storage-globally-configurable/index.html index 7162dbe33..fd4ef847b 100644 --- a/manual/hosts/disk-devices-used-for-vm-storage-globally-configurable/index.html +++ b/manual/hosts/disk-devices-used-for-vm-storage-globally-configurable/index.html @@ -1950,111 +1950,111 @@

                    Environment setup

                    Scenario 1: Node type: Create

                    Firmware | Harvester install | Auto provision
                    BIOS     | No MBR            | set

                    Node type: Join

                    Firmware | Harvester Install | Auto provision
                    BIOS     | No MBR            | No set
                  4. Scenario 2: Node type: Create

                    Firmware | Harvester install | Auto provision
                    BIOS     | MBR               | set

                    Node type: Join

                    Firmware | Harvester Install | Auto provision
                    BIOS     | MBR               | No set
                  5. Scenario 3: Node type: Create

                    Firmware | Harvester install | Auto provision
                    UEFI     | GPT               | set

                    Node type: Join

                    Firmware | Harvester Install | Auto provision
                    UEFI     | GPT               | No set
                diff --git a/manual/hosts/index.xml b/manual/hosts/index.xml index e2babc4f9..a48709231 100644 --- a/manual/hosts/index.xml +++ b/manual/hosts/index.xml @@ -12,336 +12,336 @@ https://harvester.github.io/tests/manual/hosts/1623-add-disk-to-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1623-add-disk-to-host/ - Related issues: #1623 Unable to add additional disks to host config Environment setup Add Disk that isn&rsquo;t assigned to host Verification Steps Head to &ldquo;Hosts&rdquo; page Click &ldquo;Edit Config&rdquo; on a node and switch to &ldquo;Disks&rdquo; tab Validate: Open dropdown and see no disks Attach a disk on that node Validate: Open dropdown and see some disks Verify that host shows new disk as available storage and Longhorn is showing new schedulable space Detach a disk on that node Validate: Open dropdown and see no disks Verify that host shows new disk as available storage and Longhorn is showing new schedulable space Expected Results Disk space should show appropriately + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1623">#1623</a> Unable to add additional disks to host config</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ol> <li>Add Disk that isn&rsquo;t assigned to host</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Head to &ldquo;Hosts&rdquo; page</li> <li>Click &ldquo;Edit Config&rdquo; on a node and switch to &ldquo;Disks&rdquo; tab</li> <li>Validate: Open dropdown and see no disks</li> <li>Attach a disk on that node</li> <li>Validate: Open dropdown and see some disks</li> <li>Verify that host shows new disk as available storage and Longhorn is showing new schedulable space</li> <li>Detach a disk on that node</li> <li>Validate: Open dropdown and see no disks</li> <li>Verify that host shows new disk as available storage and Longhorn is showing new schedulable space</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Disk space should show appropriately <img src="https://user-images.githubusercontent.com/83787952/146289651-3c8b8da7-5ba1-4a15-aa4f-32f24af4b8dc.png" alt="image"></li> </ol> Agent Node should not rely on specific master Node https://harvester.github.io/tests/manual/hosts/agent_node_connectivity/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/agent_node_connectivity/ - Ref: https://github.com/harvester/harvester/issues/1521 Verify Items Agent Node should keep connection when any master Node is down Case: Agent Node&rsquo;s connecting status Install Harvester with 4 nodes which joining node MUST join by VIP (point server-url to use VIP) Make sure all nodes are ready Login to dashboard, check host state become Active SSH to the 1st node, run command kubectl get node to check all STATUS should be Ready SSH to agent nodes which ROLES IS &lt;none&gt; in Step 2i&rsquo;s output Output should contains VIP in the server URL, by run command cat /etc/rancher/rke2/config. 
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/1521">https://github.com/harvester/harvester/issues/1521</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Agent Node should keep connection when any master Node is down</li> </ul> <h2 id="case-agent-nodes-connecting-status">Case: Agent Node&rsquo;s connecting status</h2> <ol> <li>Install Harvester with 4 nodes which joining node MUST join by VIP (point <code>server-url</code> to use VIP)</li> <li>Make sure all nodes are ready <ol> <li>Login to dashboard, check host <strong>state</strong> become <code>Active</code></li> <li>SSH to the 1st node, run command <code>kubectl get node</code> to check all <strong>STATUS</strong> should be <code>Ready</code></li> </ol> </li> <li>SSH to agent nodes which <strong>ROLES</strong> IS <code>&lt;none&gt;</code> in <strong>Step 2i</strong>&rsquo;s output <ul> <li><input checked="" disabled="" type="checkbox"> Output should contains VIP in the server URL, by run command <code>cat /etc/rancher/rke2/config.yaml.d/90-harvester-vip.yaml</code></li> <li><input checked="" disabled="" type="checkbox"> Output should contain the line <code>server: https://127.0.0.1:6443</code>, by run command <code>cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig</code></li> <li><input checked="" disabled="" type="checkbox"> Output should contain the line <code>server: https://127.0.0.1:6443</code>, by run command <code>cat /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig</code></li> </ul> </li> <li>SSH to server nodes which <strong>ROLES</strong> contains <code>control-plane</code> in <strong>Step 2i</strong>&rsquo;s output <ul> <li><input checked="" disabled="" type="checkbox"> Check file should not exist in the path <code>/etc/rancher/rke2/config.yaml.d/90-harvester-vip.yaml</code></li> </ul> </li> <li>Shut down a server node, check following things <ul> <li><input checked="" disabled="" type="checkbox"> Host <strong>State</strong> should not be <code>Active</code> in dashboard</li> <li><input checked="" disabled="" type="checkbox"> Node <strong>STATUS</strong> should be <code>NotReady</code> in the command output of <code>kubectl get node</code></li> <li><input checked="" disabled="" type="checkbox"> <strong>STATUS</strong> of agent nodes should be <code>Ready</code> in the command output of <code>kubectl get node</code></li> </ul> </li> <li>Power on the server node, wait until it back to cluster</li> <li>repeat <strong>Step 5-6</strong> for other server nodes</li> </ol> Attach unpartitioned NVMe disks to host https://harvester.github.io/tests/manual/hosts/attach-unpartitioned-nvme-disks-to-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/attach-unpartitioned-nvme-disks-to-host/ - Related issues: #1414 Adding unpartitioned NVMe disks fails Category: Storage Verification Steps Use qemu-img create -f qcow2 command to create three disk image locally Shutdown target node VM machine Directly edit VM xml content in virt manager page Add to the first line Add the following line before the end of quote &lt;qemu:commandline&gt; &lt;qemu:arg value=&#34;-drive&#34;/&gt; &lt;qemu:arg value=&#34;file=/home/davidtclin/Documents/Software/qemu_kvm/node_3/nvme301.img,if=none,id=D22&#34;/&gt; &lt;qemu:arg value=&#34;-device&#34;/&gt; &lt;qemu:arg value=&#34;nvme,drive=D22,serial=1234&#34;/&gt; &lt;qemu:arg value=&#34;-drive&#34;/&gt; &lt;qemu:arg value=&#34;file=/home/davidtclin/Documents/Software/qemu_kvm/node_3/nvme302.img,if=none,id=D23&#34;/&gt; &lt;qemu:arg 
value=&#34;-device&#34;/&gt; &lt;qemu:arg value=&#34;nvme,drive=D23,serial=1235&#34;/&gt; &lt;qemu:arg value=&#34;-drive&#34;/&gt; &lt;qemu:arg value=&#34;file=/home/davidtclin/Documents/Software/qemu_kvm/node_3/nvme303. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1414">#1414</a> Adding unpartitioned NVMe disks fails</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Use <code>qemu-img create -f qcow2</code> command to create three disk image locally</li> <li>Shutdown target node VM machine</li> <li>Directly edit VM xml content in virt manager page</li> <li>Add <!-- raw HTML omitted --> to the first line</li> <li>Add the following line before the end of <!-- raw HTML omitted --> quote</li> </ol> <pre tabindex="0"><code>&lt;qemu:commandline&gt; &lt;qemu:arg value=&#34;-drive&#34;/&gt; &lt;qemu:arg value=&#34;file=/home/davidtclin/Documents/Software/qemu_kvm/node_3/nvme301.img,if=none,id=D22&#34;/&gt; &lt;qemu:arg value=&#34;-device&#34;/&gt; &lt;qemu:arg value=&#34;nvme,drive=D22,serial=1234&#34;/&gt; &lt;qemu:arg value=&#34;-drive&#34;/&gt; &lt;qemu:arg value=&#34;file=/home/davidtclin/Documents/Software/qemu_kvm/node_3/nvme302.img,if=none,id=D23&#34;/&gt; &lt;qemu:arg value=&#34;-device&#34;/&gt; &lt;qemu:arg value=&#34;nvme,drive=D23,serial=1235&#34;/&gt; &lt;qemu:arg value=&#34;-drive&#34;/&gt; &lt;qemu:arg value=&#34;file=/home/davidtclin/Documents/Software/qemu_kvm/node_3/nvme303.img,if=none,id=D24&#34;/&gt; &lt;qemu:arg value=&#34;-device&#34;/&gt; &lt;qemu:arg value=&#34;nvme,drive=D24,serial=1236&#34;/&gt; &lt;/qemu:commandline&gt; </code></pr Automatically get VIP during PXE installation https://harvester.github.io/tests/manual/hosts/1410-pxe-installation-automatically-get-vip/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1410-pxe-installation-automatically-get-vip/ - Related issues: #1410 Support getting VIP automatically during PXE boot installation Verification Steps Comment vip and vip_hw_addr in ipxe-examples/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2 Start vagrant-pxe-harvester Run kubectl get cm -n harvester-system vip Check whether we can get ip and hwAddress in it Run ip a show harvester-mgmt Check whether there are two IPs in it and one is the vip. 
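A minimal sketch of those two checks from a shell on the node, using the ConfigMap and interface named in the steps:

```
# The vip ConfigMap created during PXE installation should carry both ip and hwAddress
kubectl get cm -n harvester-system vip -o yaml | grep -E 'ip:|hwAddress:'
# harvester-mgmt should list two addresses, one of them being the VIP
ip a show harvester-mgmt
```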
Expected Results VIP should automatically be assigned + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1410">#1410</a> Support getting VIP automatically during PXE boot installation</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Comment <code>vip</code> and <code>vip_hw_addr</code> in <code>ipxe-examples/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2</code></li> <li>Start vagrant-pxe-harvester</li> <li>Run <code>kubectl get cm -n harvester-system vip</code> <ul> <li>Check whether we can get <code>ip</code> and <code>hwAddress</code> in it</li> </ul> </li> <li>Run <code>ip a show harvester-mgmt</code> <ul> <li>Check whether there are two IPs in it and one is the vip.</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VIP should automatically be assigned</li> </ol> Check crash dump when there's a kernel panic https://harvester.github.io/tests/manual/hosts/1357-kernel-panic-check-crash-dump/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1357-kernel-panic-check-crash-dump/ - Related issues: #1357 Crash dump not written when kernel panic occurs Verification Steps Created new single node cluster with 16GB RAM Booted into debug mode from GRUB entry Created several VMs triggered kernel panic with echo c &gt;/proc/sysrq-trigger Waited for reboot Verified that dump was saved in /var/crash Expected Results dump should be saved in /var/crash + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1357">#1357</a> Crash dump not written when kernel panic occurs</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Created new single node cluster with 16GB RAM</li> <li>Booted into debug mode from GRUB entry</li> <li>Created several VMs</li> <li>triggered kernel panic with <code>echo c &gt;/proc/sysrq-trigger</code></li> <li>Waited for reboot</li> <li>Verified that dump was saved in <code>/var/crash</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>dump should be saved in <code>/var/crash</code></li> </ol> check detailed network status in host page https://harvester.github.io/tests/manual/hosts/check-detailed-network-status-in-host-page/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/check-detailed-network-status-in-host-page/ - Related issues: #531 Better error messages when misconfiguring multiple nics Category: Host Verification Steps Enable vlan cluster network setting and set a default network interface Wait a while for the setting take effect on all harvester nodes Click nodes on host page Check the network tab Expected Results On the Host view page, now we can see detailed network status including Name, Type, IP Address, Status etc.. 
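For the kernel-panic case above (#1357), the trigger and the post-reboot check can be scripted as follows; run it only on a disposable test node, since it crashes the kernel immediately.

```
# On the Harvester node (booted in debug mode, as root): force a kernel panic
echo c > /proc/sysrq-trigger
# After the node has rebooted, the crash dump should have been written under /var/crash
ls -l /var/crash
```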
Check all network interface can display Check the Name, Type, IP Address, Status display correct values + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/531">#531</a> Better error messages when misconfiguring multiple nics</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Enable vlan cluster network setting and set a default network interface</li> <li>Wait a while for the setting take effect on all harvester nodes</li> <li>Click nodes on host page</li> <li>Check the network tab</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>On the Host view page, now we can see detailed network status including <code>Name</code>, <code>Type</code>, <code>IP Address</code>, <code>Status</code> etc.. <img src="https://user-images.githubusercontent.com/29251855/141070311-55ec4382-d777-4289-91c7-cebe81db3356.png" alt="image"></p> Check Longhorn volume mount point https://harvester.github.io/tests/manual/hosts/1667-check-longhorn-volume-mount/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1667-check-longhorn-volume-mount/ - Related issues: #1667 data partition is not mounted to the LH path properly Verification Steps Install Harvester node in VM from ISO Check partitions with lsblk -f Verify mount point of /var/lib/longhorn Expected Results Mount point should show /var/lib/longhorn + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1667">#1667</a> data partition is not mounted to the LH path properly</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester node in VM from ISO</li> <li>Check partitions with <code>lsblk -f</code></li> <li>Verify mount point of <code>/var/lib/longhorn</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Mount point should show <code>/var/lib/longhorn</code> <img src="https://user-images.githubusercontent.com/83787952/146290004-0584f817-d9df-4f4d-9069-d3ed4199b30f.png" alt="image"></li> </ol> Check redirect for editing server URL setting https://harvester.github.io/tests/manual/hosts/1489-redirect-for-server-url-setting/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1489-redirect-for-server-url-setting/ - Related issues: #1489 Edit Advanced Setting option server-url will redirect to inappropriate page Verification Steps Install harvester Access harvester Edit server-url form settings Check server-url save, cancel, and back. Additional context: Expected Results URL should stay the same when navigating + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1489">#1489</a> Edit Advanced Setting option server-url will redirect to inappropriate page</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install harvester</li> <li>Access harvester</li> <li>Edit server-url form settings</li> <li>Check server-url save, cancel, and back. 
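For the Longhorn mount-point case above (#1667), the check from a node shell is simply the following; findmnt is assumed to be available on the node image.

```
lsblk -f
# The data partition should be mounted at /var/lib/longhorn
findmnt /var/lib/longhorn
```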
Additional context: <img src="https://user-images.githubusercontent.com/18737885/140492691-969380aa-dbed-4999-9e90-e589dd93e4e4.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>URL should stay the same when navigating</li> </ol> Cluster with Witness Node https://harvester.github.io/tests/manual/hosts/3266-witness-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/3266-witness-node/ - Witness node is a lightweight node only runs etcd which is not schedulable and also not for workloads. The main use case is to form a quorum with the other 2 nodes. Kubernetes need at least 3 etcd nodes to form a quorum, so Harvester also suggests using at least 3 nodes with similar hardware spec. This witness node feature aims for the edge case that user only have 2 powerful + 1 lightweight nodes thus helping benefit both cost and high availability. + <p>Witness node is a lightweight node only runs <strong>etcd</strong> which is not schedulable and also not for workloads. The main use case is to form a quorum with the other 2 nodes.</p> <p>Kubernetes need at least 3 <strong>etcd</strong> nodes to form a quorum, so Harvester also suggests using at least 3 nodes with similar hardware spec. This witness node feature aims for the edge case that user only have 2 powerful + 1 lightweight nodes thus helping benefit both cost and high availability.</p> Delete Host (e2e_be) https://harvester.github.io/tests/manual/hosts/delete-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/delete-host/ - Navigate to the Hosts page and select the node Click Delete Expected Results SSH to the node and check the nodes has components deleted. + <ol> <li>Navigate to the Hosts page and select the node</li> <li>Click Delete</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>SSH to the node and check the nodes has components deleted.</li> </ol> Delete host that has VMs on it https://harvester.github.io/tests/manual/hosts/delete-host-with-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/delete-host-with-vm/ - Navigate to the Hosts page and select the node Click Delete Expected Results An alert message should appear. If VM exists it should stop user to delete the node or move VM to other node. If VM is getting moved to another node and there is no space, it should stop user to delete the node. 
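For the delete-host cases above, two hedged spot-checks can back up the UI observations; the systemd unit names below are the standard RKE2 units and are meant as an illustrative check, not a documented cleanup contract.

```
# From a remaining management node: the deleted host should disappear from the node list
kubectl get nodes
# On the deleted host itself (via SSH): the RKE2 components should no longer be active
systemctl status rke2-server rke2-agent
```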
Existing bugs https://github.com/harvester/harvester/issues/1004 + <ol> <li>Navigate to the Hosts page and select the node</li> <li>Click Delete</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>An alert message should appear.</li> <li>If VM exists it should stop user to delete the node or move VM to other node.</li> <li>If VM is getting moved to another node and there is no space, it should stop user to delete the node.</li> </ol> <h3 id="existing-bugs">Existing bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/1004">https://github.com/harvester/harvester/issues/1004</a></p> Disk can only be added once on UI https://harvester.github.io/tests/manual/hosts/add_disk_on_ui/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/add_disk_on_ui/ - Ref: https://github.com/harvester/harvester/issues/1608 Verify Items NVMe disk can only be added once on UI Case: add new NVMe disk on dashboard UI Install Harvester with 2 nodes Power off 2nd node Update VM&rsquo;s xml definition (by using virsh edit or virt-manager) Create nvme.img block: dd if=/dev/zero of=/var/lib/libvirt/images/nvme.img bs=1M count=4096 change owner chown qemu:qemu /var/lib/libvirt/images/nvme.img update &lt;domain type=&quot;kvm&quot;&gt; to &lt;domain type=&quot;kvm&quot; xmlns:qemu=&quot;http://libvirt.org/schemas/domain/qemu/1.0&quot;&gt; append xml node into domain as below: &lt;qemu:commandline&gt; &lt;qemu:arg value=&#34;-drive&#34;/&gt; &lt;qemu:arg value=&#34;file=/var/lib/libvirt/images/nvme. + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1608">https://github.com/harvester/harvester/issues/1608</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>NVMe disk can only be added once on UI</li> </ul> <h2 id="case-add-new-nvme-disk-on-dashboard-ui">Case: add new NVMe disk on dashboard UI</h2> <ol> <li>Install Harvester with 2 nodes</li> <li>Power off 2nd node</li> <li>Update VM&rsquo;s xml definition (by using <code>virsh edit</code> or virt-manager) <ul> <li>Create <strong>nvme.img</strong> block: <code>dd if=/dev/zero of=/var/lib/libvirt/images/nvme.img bs=1M count=4096</code></li> <li>change owner <code>chown qemu:qemu /var/lib/libvirt/images/nvme.img</code></li> <li>update <code>&lt;domain type=&quot;kvm&quot;&gt;</code> to <code>&lt;domain type=&quot;kvm&quot; xmlns:qemu=&quot;http://libvirt.org/schemas/domain/qemu/1.0&quot;&gt;</code></li> <li>append xml node into <strong>domain</strong> as below:</li> </ul> </li> </ol> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-xml" data-lang="xml"><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:commandline&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;-drive&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;file=/var/lib/libvirt/images/nvme.img,if=none,id=D22,format=raw&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;-device&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span 
style="display:flex;"><span> <span style="color:#f92672">&lt;qemu:arg</span> <span style="color:#a6e22e">value=</span><span style="color:#e6db74">&#34;nvme,drive=D22,serial=1234&#34;</span><span style="color:#f92672">/&gt;</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;/qemu:commandline&gt;</span> </span></span></code></pr Disk devices used for VM storage should be globally configurable https://harvester.github.io/tests/manual/hosts/disk-devices-used-for-vm-storage-globally-configurable/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/disk-devices-used-for-vm-storage-globally-configurable/ - Related issue: #1241 Disk devices used for VM storage should be globally configurable Related issue: #1382 Exclude OS root disk and partitions on forced GPT partition Related issue: #1599 Extra disk auto provision from installation may cause NDM can&rsquo;t find a valid longhorn node to provision Category: Storage Test Scenarios (Checked means verification PASS) BIOS firmware + No MBR (Default) + Auto disk` provisioning config BIOS firmware + MBR + Auto disk provisioning config UEFI firmware + GPT (Default) + Auto disk provisioning config BIOS firmware + GPT (Default) +Auto Provisioning on harvester-config Environment setup Scenario 1: Node type: Create + <ul> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1241">#1241</a> Disk devices used for VM storage should be globally configurable</p> </li> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1382">#1382</a> Exclude OS root disk and partitions on forced GPT partition</p> </li> <li> <p>Related issue: <a href="https://github.com/harvester/harvester/issues/1599">#1599</a> Extra disk auto provision from installation may cause NDM can&rsquo;t find a valid longhorn node to provision</p> </li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="test-scenarios">Test Scenarios</h2> <p>(Checked means verification <code>PASS</code>)</p> Download host YAML https://harvester.github.io/tests/manual/hosts/download-host-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/download-host-yaml/ - Navigate to the Hosts page and select the node Click Download Yaml Expected Results The Yaml should get downloaded. + <ol> <li>Navigate to the Hosts page and select the node</li> <li>Click Download Yaml</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The Yaml should get downloaded.</li> </ol> Edit Config (e2e_be) https://harvester.github.io/tests/manual/hosts/edit-config/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/edit-config/ - Navigate to the Hosts page and select the node Click edit config. Add description and other details Try to modify the network config Expected Results The edited values should be saved and reflected on the page. + <ol> <li>Navigate to the Hosts page and select the node</li> <li>Click edit config.</li> <li>Add description and other details</li> <li>Try to modify the network config</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The edited values should be saved and reflected on the page.</li> </ol> Edit Config YAML (e2e_be) https://harvester.github.io/tests/manual/hosts/edit-config-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/edit-config-yaml/ - Navigate to the Hosts page and select the node Click edit config through YAML. 
Add description and other details Try to modify the network config Expected Results The edited values should be saved and reflected on the page. + <ol> <li>Navigate to the Hosts page and select the node</li> <li>Click edit config through YAML.</li> <li>Add description and other details</li> <li>Try to modify the network config</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The edited values should be saved and reflected on the page.</li> </ol> Host list should display the disk error message on failure https://harvester.github.io/tests/manual/hosts/host-list-should-display-disk-error-message-on-failure/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/host-list-should-display-disk-error-message-on-failure/ - Related issue: #1167 Host list should display the disk error message on table Category: Storage Verification Steps Shutdown existing node vm machine Run &ldquo;qemu-img create&rdquo; command to make a nvme.img Edit quem/kvm xml setting to attach the nvme image Start VM Open hostpage and edit your target node config Add the new nvme disk Shutdown VM Remove the attach device setting in VM xml file Start VM Open Host page, the targe node will show warning with unready and unscheduable disk exists Expected Results If host encounter disk ready or schedule failure, on host page the &ldquo;disk state&rdquo; will show warning With a hover tip &ldquo;Host have unready or unschedulable disks&rdquo; Can create load balancer correctly with health check setting + <ul> <li>Related issue: <a href="https://github.com/harvester/harvester/issues/1167">#1167</a> Host list should display the disk error message on table</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Shutdown existing node vm machine</li> <li>Run &ldquo;qemu-img create&rdquo; command to make a nvme.img</li> <li>Edit quem/kvm xml setting to attach the nvme image</li> <li>Start VM</li> <li>Open hostpage and edit your target node config</li> <li>Add the new nvme disk</li> <li>Shutdown VM</li> <li>Remove the attach device setting in VM xml file</li> <li>Start VM</li> <li>Open Host page, the targe node will show warning with unready and unscheduable disk exists</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>If host encounter disk ready or schedule failure, on host page the &ldquo;disk state&rdquo; will show <strong>warning</strong> With a hover tip &ldquo;<strong>Host have unready or unschedulable disks&rdquo;</strong></li> </ol> <p><img src="https://user-images.githubusercontent.com/29251855/138687164-877422a0-d33b-4e26-9c0b-d52b8f4e6995.png" alt="image"></p> Maintenance mode for host with multiple VMs https://harvester.github.io/tests/manual/hosts/maintenance-mode-multiple-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-multiple-vm/ - Put host in maintenance mode Migrate VMs Wait for VMs to migrate Wait for any vms to migrate off Do health check on VMs Expected Results Host should start to go into maintenance mode Any VMs should migrate off Host should go into maintenance mode + <ol> <li>Put host in maintenance mode</li> <li>Migrate VMs</li> <li>Wait for VMs to migrate</li> <li>Wait for any vms to migrate off</li> <li>Do health check on VMs</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host should start to go into maintenance mode</li> <li>Any VMs should migrate off</li> <li>Host should go into maintenance mode</li> 
</ol> Maintenance mode for host with one VM (e2e_be) https://harvester.github.io/tests/manual/hosts/maintenance-mode-one-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-one-vm/ - Put host in maintenance mode Migrate VMs Wait for VMs to migrate Wait for any vms to migrate off Do health check on VMs Expected Results Host should start to go into maintenance mode Any VMs should migrate off Host should go into maintenance mode + <ol> <li>Put host in maintenance mode</li> <li>Migrate VMs</li> <li>Wait for VMs to migrate</li> <li>Wait for any vms to migrate off</li> <li>Do health check on VMs</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host should start to go into maintenance mode</li> <li>Any VMs should migrate off</li> <li>Host should go into maintenance mode</li> </ol> Maintenance mode on node with no vms (e2e_be) https://harvester.github.io/tests/manual/hosts/maintenance-mode-no-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-no-vm/ - Put host in maintenance mode Wait for host to go from entering maintenance mode to maintenance mode. Expected Results Host should start to go into maintenance mode Host should go into maintenance mode + <ol> <li>Put host in maintenance mode</li> <li>Wait for host to go from entering maintenance mode to maintenance mode.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host should start to go into maintenance mode</li> <li>Host should go into maintenance mode</li> </ol> Migrate back VMs that were on host after taking host out of maintenance mode https://harvester.github.io/tests/manual/hosts/q-maintenance-mode-migrate-back-vms/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/q-maintenance-mode-migrate-back-vms/ - Prerequisite: Have a Harvester cluster with at least 2 nodes setup. Test Steps: Given Create a vm with node selector lets say node-1. And Create a vm without node selector on node-1. AND Write some data into both the VMs. When Put the host node-1 into maintenance mode. Then All the Vms on node-1 should be migrated to other nodes or the node should show warning that the vm with node selector can&rsquo;t migrate. 
+ <h3 id="prerequisite">Prerequisite:</h3> <p>Have a Harvester cluster with at least 2 nodes setup.</p> <h3 id="test-steps">Test Steps:</h3> <p><strong>Given</strong> Create a vm with node selector lets say node-1.</p> <p><strong>And</strong> Create a vm without node selector on node-1.</p> <p><strong>AND</strong> Write some data into both the VMs.</p> <p><strong>When</strong> Put the host node-1 into maintenance mode.</p> <p><strong>Then</strong> All the Vms on node-1 should be migrated to other nodes or the node should show warning that the vm with node selector can&rsquo;t migrate.</p> Move Longhorn storage to another partition https://harvester.github.io/tests/manual/hosts/move-longhorn-storage-to-another-partition/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/move-longhorn-storage-to-another-partition/ - Related issue: #1316 Move Longhorn storage to another partition Category: Storage Test Scenarios Case 1: UEFI + GPT (Disk &lt; MBR Limit) Case 2: BIOS + No MBR (Disk &lt; MBR Limit) Case 3: BIOS + Force MBR (Disk &lt; MBR Limit) Case 4: BIOS + No MBR (Disk &gt; MBR Limit) Case 5: BIOS + Force MBR (Disk &gt; MBR Limit) Case 6: UEFI + GPT (Disk &gt; MBR Limit) Environment setup Test Environment: 1 node harvester on local kvm machine + <ul> <li>Related issue: <a href="https://github.com/harvester/harvester/issues/1316">#1316</a> Move Longhorn storage to another partition</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="test-scenarios">Test Scenarios</h2> <p><img src="https://user-images.githubusercontent.com/29251855/148171176-5dfe439b-8f61-484b-8c16-9c0236a5c1f2.png" alt="image"></p> <ul> <li>Case 1: UEFI + GPT (Disk &lt; MBR Limit)</li> <li>Case 2: BIOS + No MBR (Disk &lt; MBR Limit)</li> <li>Case 3: BIOS + Force MBR (Disk &lt; MBR Limit)</li> <li>Case 4: BIOS + No MBR (Disk &gt; MBR Limit)</li> <li>Case 5: BIOS + Force MBR (Disk &gt; MBR Limit)</li> <li>Case 6: UEFI + GPT (Disk &gt; MBR Limit)</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ul> <li> <p>Test Environment: 1 node harvester on local kvm machine</p> Node Labeling for VM scheduling https://harvester.github.io/tests/manual/hosts/node_labeling/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/node_labeling/ - Ref: https://github.com/harvester/harvester/issues/1416 Verify Items Host labels can be assigned during installation via config-create / config-join YAML. Host labels can be managed post installation via the Harvester UI. Host label information can be accessed in Rancher Virtualization Management UI. 
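The verify items above are checked through the Host details page; a quick CLI cross-check that complements the cases below (assumes kubectl access to the cluster):

```
# Labels set via os.labels at install time, or edited later in the UI, should appear here
kubectl get nodes --show-labels
```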
Case: Label node when installing Install Harvester with config file and os.labels option Navigate to Host details then navigate to Labels in Config Check additional labels should be displayed Case: Label node after installed Install Harvester with at least 2 nodes Navigate to Host details then navigate to Labels in Config Use edit config to modify labels Reboot the Node and wait until its state become active Navigate to Host details then Navigate to Labels in Config Check modified labels should be displayed Case: Node&rsquo;s Label availability Install Harvester with at least 2 nodes Navigate to Host details then navigate to Labels in Config Use edit config to modify labels Reboot the Node and wait until its state become active Navigate to Host details then Navigate to Labels in Config Check modified labels should be displayed Install Rancher with any nodes Navigate to Virtualization Management and import former created Harvester Wait Until state become Active Click Name field to visit dashboard repeat step 2-7, and both compare from Harvester&rsquo;s dashboard (accessing via Harvester&rsquo;s VIP) + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1416">https://github.com/harvester/harvester/issues/1416</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Host labels can be assigned during installation via config-create / config-join YAML.</li> <li>Host labels can be managed post installation via the Harvester UI.</li> <li>Host label information can be accessed in Rancher Virtualization Management UI.</li> </ul> <h2 id="case-label-node-when-installing">Case: Label node when installing</h2> <ol> <li>Install Harvester with config file and <a href="https://docs.harvesterhci.io/v1.0/install/harvester-configuration/#oslabels"><strong>os.labels</strong></a> option</li> <li>Navigate to Host details then navigate to Labels in Config</li> <li>Check additional labels should be displayed</li> </ol> <h2 id="case-label-node-after-installed">Case: Label node after installed</h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Navigate to Host details then navigate to <strong>Labels</strong> in Config</li> <li>Use <strong>edit config</strong> to modify labels</li> <li>Reboot the Node and wait until its state become active</li> <li>Navigate to Host details then Navigate to Labels in Config</li> <li>Check modified labels should be displayed</li> </ol> <h2 id="case-nodes-label-availability">Case: Node&rsquo;s Label availability</h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Navigate to Host details then navigate to <strong>Labels</strong> in Config</li> <li>Use <strong>edit config</strong> to modify labels</li> <li>Reboot the Node and wait until its state become active</li> <li>Navigate to Host details then Navigate to Labels in Config</li> <li>Check modified labels should be displayed</li> <li>Install Rancher with any nodes</li> <li>Navigate to <em>Virtualization Management</em> and import former created Harvester</li> <li>Wait Until state become <strong>Active</strong></li> <li>Click <em>Name</em> field to visit dashboard</li> <li>repeat step 2-7, and both compare from Harvester&rsquo;s dashboard (accessing via Harvester&rsquo;s VIP)</li> </ol> Nodes with cordoned status should not be in VM migration list https://harvester.github.io/tests/manual/hosts/nodes-with-cordoned-status-should-not-be-in-vm-migration-list/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/nodes-with-cordoned-status-should-not-be-in-vm-migration-list/ - 
Related issues: #1501 Nodes with cordoned status should not be in the selection list for VM migration Category: Host Verification Steps Create multiple VMs on two of the nodes Set the idle node to cordoned state Edit any config of VM, click migrate Check the available node in the migration list Expected Results Node set in cordoned state will not show up in the available migration list + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1501">#1501</a> Nodes with cordoned status should not be in the selection list for VM migration</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create multiple VMs on two of the nodes</li> <li>Set the idle node to cordoned state</li> <li>Edit any config of VM, click migrate</li> <li>Check the available node in the migration list</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Node set in cordoned state will not show up in the available migration list</p> Power down and power up the node https://harvester.github.io/tests/manual/hosts/negative-power-down-power-up-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/negative-power-down-power-up-node/ - Create two vms on a cluster. Power down the node. Try to migrate a VM from the down node to active node. Leave the 2nd vm as it is. Power on the node Expected Results The 1st VM should be migrated to other node on manually doing it. The 2nd VM should be accessible once the node is up. Known bugs https://github.com/harvester/harvester/issues/982 + <ol> <li>Create two vms on a cluster.</li> <li>Power down the node.</li> <li>Try to migrate a VM from the down node to active node.</li> <li>Leave the 2nd vm as it is.</li> <li>Power on the node</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The 1st VM should be migrated to other node on manually doing it.</li> <li>The 2nd VM should be accessible once the node is up.</li> </ol> <h3 id="known-bugs">Known bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/982">https://github.com/harvester/harvester/issues/982</a></p> Power down the node https://harvester.github.io/tests/manual/hosts/negative-power-down-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/negative-power-down-node/ - Create two vms on a cluster. Power down the node. Try to migrate a VM from the down node to active node. Leave the 2nd vm as it is. Expected Results The 1st VM should be migrated to other node on manually doing it. 
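For the cordoned-status case above (#1501), the test drives the state change from the UI; a hedged CLI equivalent makes the expected behaviour easy to confirm:

```
# Cordon the idle node (the UI "Cordon" action does the equivalent)
kubectl cordon <idle-node-name>
# The node should report SchedulingDisabled and must not be offered as a migration target
kubectl get nodes
```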
The 2nd VM should be recovered from the lost node + <ol> <li>Create two vms on a cluster.</li> <li>Power down the node.</li> <li>Try to migrate a VM from the down node to active node.</li> <li>Leave the 2nd vm as it is.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The 1st VM should be migrated to other node on manually doing it.</li> <li>The 2nd VM should be recovered from the lost node</li> </ol> Power off node triggers VM reschedule https://harvester.github.io/tests/manual/hosts/vm_rescheduled_after_host_poweroff/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/vm_rescheduled_after_host_poweroff/ - Ref: N/A, legacy test case, VM is not migrated but rescheduled Criteria VM should be created and started successfully Node should be unavailable after shutdown VM should be restarted automatically Verify Steps: Install Harvester with at least 2 nodes Create an image for VM creation Create a VM vm1 and start it vm1 should start successfully Power off the node hosting vm1 the node should become unavailable on the dashboard VM vm1 should be restarted automatically after vm-force-reset-policy seconds + <p>Ref: N/A, legacy test case, VM is not migrated but rescheduled</p> <h3 id="criteria">Criteria</h3> <ul> <li><input checked="" disabled="" type="checkbox"> VM should be created and started successfully</li> <li><input checked="" disabled="" type="checkbox"> Node should be unavailable after shutdown</li> <li><input checked="" disabled="" type="checkbox"> VM should be restarted automatically</li> </ul> <h2 id="verify-steps">Verify Steps:</h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Create an image for VM creation</li> <li>Create a VM <code>vm1</code> and start it</li> <li><code>vm1</code> should start successfully</li> <li>Power off the node hosting <code>vm1</code></li> <li>the node should become unavailable on the dashboard</li> <li>VM <code>vm1</code> should be restarted automatically after <code>vm-force-reset-policy</code> seconds</li> </ol> PXE install without iso_url field https://harvester.github.io/tests/manual/hosts/1439-pxe-install-without-iso-url-field/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1439-pxe-install-without-iso-url-field/ - Related issues: #1439 PXE boot installation doesn&rsquo;t give an error if iso_url field is missing Environment setup This is easiest to test with the vagrant setup at https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester edit https://github.com/harvester/ipxe-examples/blob/main/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2#L27 to be blank Verification Steps Run the vagrant ./setup.sh from the vagrant repo Expected Results You should get an error in the console for the VM when installing + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1439">#1439</a> PXE boot installation doesn&rsquo;t give an error if iso_url field is missing</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>This is easiest to test with the vagrant setup at <a href="https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester">https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester</a></p> <ol> <li>edit <a 
href="https://github.com/harvester/ipxe-examples/blob/main/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2#L27">https://github.com/harvester/ipxe-examples/blob/main/vagrant-pxe-harvester/ansible/roles/harvester/templates/config-create.yaml.j2#L27</a> to be blank</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Run the vagrant <code>./setup.sh</code> from the vagrant repo</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error in the console for the VM when installing</li> </ol> Reboot a cluster and check VIP https://harvester.github.io/tests/manual/hosts/1669-reboot-cluster-check-vip/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1669-reboot-cluster-check-vip/ - Related issues: #1669 Unable to access harvester VIP nor node IP after reboot or fully power cycle node machines (Intermittent) Verification Steps Enable VLAN with NIC harvester-mgmt Create VLAN 1 Disable VLAN Enable VLAN again shutdown node 3, 2, 1 server machine Wait for 15 minutes Power on node 1 server machine, wait for 20 seconds Power on node 2 server machine, wait for 20 seconds Power on node 3 server machine Check if you can access VIP and each node IP Expected Results VIP should load the page and show on every node in the terminal + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1669">#1669</a> Unable to access harvester VIP nor node IP after reboot or fully power cycle node machines (Intermittent)</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Enable VLAN with NIC harvester-mgmt</li> <li>Create VLAN 1</li> <li>Disable VLAN</li> <li>Enable VLAN again</li> <li>shutdown node 3, 2, 1 server machine</li> <li>Wait for 15 minutes</li> <li>Power on node 1 server machine, wait for 20 seconds</li> <li>Power on node 2 server machine, wait for 20 seconds</li> <li>Power on node 3 server machine</li> <li>Check if you can access VIP and each node IP</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VIP should load the page and show on every node in the terminal</li> </ol> Reboot host that is in maintenance mode (e2e_be) https://harvester.github.io/tests/manual/hosts/maintenance-mode-reboot-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-reboot-host/ - For Host that is in maintenance mode and turned on Reboot host Expected Results Host should reboot Maintenance mode label in hosts list should go from yellow to red to yellow Known Bugs https://github.com/harvester/harvester/issues/1272 + <ol> <li>For Host that is in maintenance mode and turned on</li> <li>Reboot host</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host should reboot</li> <li>Maintenance mode label in hosts list should go from yellow to red to yellow</li> </ol> <h3 id="known-bugs">Known Bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/1272">https://github.com/harvester/harvester/issues/1272</a></p> Reboot host trigger VM migration https://harvester.github.io/tests/manual/hosts/vm_migrated_after_host_reboot/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/vm_migrated_after_host_reboot/ - Ref: N/A, legacy test case Criteria VM should created and started successfully Node should be unavailable while rebooting VM should be migrated to ohter node Verify Steps: Install Harvester with at least 2 nodes Create a image for VM creation Create a VM vm1 and start it vm1 
should started successfully Reboot the node hosting vm1 the node should becomes unavailable on dashboard vm1 should be automatically migrated to another node + <p>Ref: N/A, legacy test case</p> <h3 id="criteria">Criteria</h3> <ul> <li><input checked="" disabled="" type="checkbox"> VM should created and started successfully</li> <li><input checked="" disabled="" type="checkbox"> Node should be unavailable while rebooting</li> <li><input checked="" disabled="" type="checkbox"> VM should be migrated to ohter node</li> </ul> <h2 id="verify-steps">Verify Steps:</h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Create a image for VM creation</li> <li>Create a VM <code>vm1</code> and start it</li> <li><code>vm1</code> should started successfully</li> <li>Reboot the node hosting <code>vm1</code></li> <li>the node should becomes unavailable on dashboard</li> <li><code>vm1</code> should be automatically migrated to another node</li> </ol> Reboot node https://harvester.github.io/tests/manual/hosts/negative-reboot-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/negative-reboot-node/ - Create a vm on the cluster. Reboot the node where the vm exists. Reboot the node where there is no vm Expected Results On rebooting the node, once the node is back up and Harvester is started, the host should become available on the cluster. + <ol> <li>Create a vm on the cluster.</li> <li>Reboot the node where the vm exists.</li> <li>Reboot the node where there is no vm</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>On rebooting the node, once the node is back up and Harvester is started, the host should become available on the cluster.</li> </ol> Recover cordon and maintenace node after harvester node machine reboot https://harvester.github.io/tests/manual/hosts/recover-cordon-or-maintenace-node-after-harvester-node-machine-reboot/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/recover-cordon-or-maintenace-node-after-harvester-node-machine-reboot/ - Related issues: #1493 When hosts are stuck in maintenance mode and the cluster is unstable you can&rsquo;t access the UI Category: Host Verification Steps Create 3 virtual machine on 3 harvester nodes Cordon 1st and 2nd node, Enable maintenance mode on 1st and 2nd node We can&rsquo;t cordon and enable maintenance node on the remaining node Reboot 1st and 2nd node bare machine Wait for harvester machine back to service Login dashboard Disable maintenance mode on 1st and 2nd node Expected Results Cordon node and enter maintenance mode, after machine reboot, user can login harvester dashboard. 
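As a CLI cross-check for the cordon and maintenance-mode steps above, a minimal sketch is given here; it assumes kubectl access to the cluster and a hypothetical node name harvester-node-1, and the dashboard actions described in the case remain the primary verification path:
<pre tabindex="0"><code># A cordoned node (or a node in maintenance mode) reports SchedulingDisabled in STATUS
kubectl get nodes
# Manually cordon / uncordon a node, equivalent to the Cordon action in the UI
kubectl cordon harvester-node-1
kubectl uncordon harvester-node-1
</code></pre>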
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1493">#1493</a> When hosts are stuck in maintenance mode and the cluster is unstable you can&rsquo;t access the UI</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create 3 virtual machine on 3 harvester nodes</li> <li>Cordon 1st and 2nd node, <img src="https://user-images.githubusercontent.com/29251855/141106858-cdfb35f3-50af-48d0-b776-1f1cc5dfcedc.png" alt="image"></li> <li>Enable maintenance mode on 1st and 2nd node <img src="https://user-images.githubusercontent.com/29251855/141106968-e4d7a6be-6c60-4771-aabd-8df0ccafe252.png" alt="image"></li> <li>We can&rsquo;t cordon and enable maintenance node on the remaining node <img src="https://user-images.githubusercontent.com/29251855/141107044-774166b8-117e-4635-b8a2-eeedb65e48fc.png" alt="image"></li> <li>Reboot 1st and 2nd node bare machine</li> <li>Wait for harvester machine back to service</li> <li>Login dashboard</li> <li>Disable maintenance mode on 1st and 2nd node</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Cordon node and enter maintenance mode, after machine reboot, user can login harvester dashboard.</li> <li>Node remain it&rsquo;s original status</li> <li>Can disable/uncordon node, it can back to original status <img src="https://user-images.githubusercontent.com/29251855/141111698-64d9d648-9018-4c14-8828-539f6e44361e.png" alt="image"></li> </ol> Remove a management node from a 3 nodes cluster and add it back to the cluster by reinstalling it https://harvester.github.io/tests/manual/hosts/remove-management-node-then-reinstall/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/remove-management-node-then-reinstall/ - From a HA cluster with 3 nodes Delete one of the nodes after the node promotion(all 3 nodes are management nodes) Reinstall the removed node with the same node name and IP The rejoined node will be promoted to master automatically Expected Results The removed node should be able to rejoin the cluster without issues Comments Purpose is to cover this scenario: https://github.com/harvester/harvester/issues/1040 Check the job promotion with the command kubectl get jobs -n harvester-system If a node is stuck in the removing status, you likely face to this issue, execute this command as workaround: kubectl get node -o name &lt;nodename&gt; | xargs -i kubectl patch {} -p '{&quot;metadata&quot;:{&quot;finalizers&quot;:[]}}' --type=merge + <ol> <li>From a HA cluster with 3 nodes</li> <li>Delete one of the nodes after the node promotion(all 3 nodes are management nodes)</li> <li>Reinstall the removed node with the same node name and IP</li> <li>The rejoined node will be promoted to master automatically</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The removed node should be able to rejoin the cluster without issues</li> </ol> <h3 id="comments">Comments</h3> <ul> <li>Purpose is to cover this scenario: <a href="https://github.com/harvester/harvester/issues/1040">https://github.com/harvester/harvester/issues/1040</a></li> <li>Check the job promotion with the command kubectl get jobs -n harvester-system</li> <li>If a node is stuck in the removing status, you likely face to this issue, execute this command as workaround: <code>kubectl get node -o name &lt;nodename&gt; | xargs -i kubectl patch {} -p '{&quot;metadata&quot;:{&quot;finalizers&quot;:[]}}' --type=merge</code></li> </ul> Remove unavailable 
node with VMs on it https://harvester.github.io/tests/manual/hosts/negative-remove-unavailable-node-with-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/negative-remove-unavailable-node-with-vm/ - Create VMs on a host. Turn off Host Remove Host from hosts list Expected Results VMs should migrate to new host Known Bugs https://github.com/harvester/harvester/issues/983 + <ol> <li>Create VMs on a host.</li> <li>Turn off Host</li> <li>Remove Host from hosts list</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VMs should migrate to new host</li> </ol> <h3 id="known-bugs">Known Bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/983">https://github.com/harvester/harvester/issues/983</a></p> Set maintenance mode on the last available node shouldn't be allowed https://harvester.github.io/tests/manual/hosts/set-maintenance-mode-on-the-last-available-node-shouldnt-be-allowed/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/set-maintenance-mode-on-the-last-available-node-shouldnt-be-allowed/ - Related issues: #1014 Trying to set maintenance mode on the last available node shouldn&rsquo;t be allowed Category: Host Verification Steps Create 3 vms located on node2 and node3 Open host page Set node 3 into maintenance mode Wait for virtual machine migrate to node 2 Set node 2 into maintenance mode wait for virtual machine migrate to node 1 Set node 2 into maintenance mode Expected Results Within 3 nodes and 3 virtual machines testing environment. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1014">#1014</a> Trying to set maintenance mode on the last available node shouldn&rsquo;t be allowed</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Create 3 vms located on node2 and node3 <img src="https://user-images.githubusercontent.com/29251855/140375836-50cfdb48-a37f-4d86-b931-04983e837cdc.png" alt="image"></p> </li> <li> <p>Open host page</p> </li> <li> <p>Set node 3 into maintenance mode</p> </li> <li> <p>Wait for virtual machine migrate to node 2</p> </li> <li> <p>Set node 2 into maintenance mode</p> Shut down host in maintenance mode and verify label change https://harvester.github.io/tests/manual/hosts/1272-shutdown-host-in-maintenance-mode/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1272-shutdown-host-in-maintenance-mode/ - Related issues: #1272 Shut down a node with maintenance mode should show red label Verification Steps Open host page Set a node to maintenance mode Turn off host vm of the node Check node status Turn on host Check node status Expected Results The node should go into maintenance mode The node label should go red When turned on the node status should go back to yellow + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1272">#1272</a> Shut down a node with maintenance mode should show red label</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Open host page</li> <li>Set a node to maintenance mode</li> <li>Turn off host vm of the node</li> <li>Check node status</li> <li>Turn on host</li> <li>Check node status</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The node should go into maintenance mode</li> <li>The node label should go red</li> <li>When turned on the node status should go back to yellow</li> </ol> Shut down host then delete hosted VM 
https://harvester.github.io/tests/manual/hosts/delete_vm_after_host_shutdown/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/delete_vm_after_host_shutdown/ - Ref: N/A, legacy test case Criteria VM should be created and started successfully Node should be unavailable after shutdown VM should be able to be deleted Verify Steps: Install Harvester with at least 2 nodes Create an image for VM creation Create a VM vm1 and start it vm1 should start successfully Power off the node hosting vm1 the node should become unavailable on the dashboard Delete vm1, vm1 should be deleted successfully + <p>Ref: N/A, legacy test case</p> <h3 id="criteria">Criteria</h3> <ul> <li><input checked="" disabled="" type="checkbox"> VM should be created and started successfully</li> <li><input checked="" disabled="" type="checkbox"> Node should be unavailable after shutdown</li> <li><input checked="" disabled="" type="checkbox"> VM should be able to be deleted</li> </ul> <h2 id="verify-steps">Verify Steps:</h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Create an image for VM creation</li> <li>Create a VM <code>vm1</code> and start it</li> <li><code>vm1</code> should start successfully</li> <li>Power off the node hosting <code>vm1</code></li> <li>the node should become unavailable on the dashboard</li> <li>Delete <code>vm1</code>, <code>vm1</code> should be deleted successfully</li> </ol> Start Host in maintenance mode (e2e_be) https://harvester.github.io/tests/manual/hosts/maintenance-mode-start-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-start-host/ - For Host that is in maintenance mode and turned off Start host Expected Results Host should turn on Maintenance mode label in hosts list should go from red to yellow Known bugs https://github.com/harvester/harvester/issues/1272 + <ol> <li>For Host that is in maintenance mode and turned off</li> <li>Start host</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host should turn on</li> <li>Maintenance mode label in hosts list should go from red to yellow</li> </ol> <h3 id="known-bugs">Known bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/1272">https://github.com/harvester/harvester/issues/1272</a></p> Take host out of maintenance mode that has been rebooted (e2e_be) https://harvester.github.io/tests/manual/hosts/maintenance-mode-enable-host-rebooted/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-enable-host-rebooted/ - For host in maintenance mode that has been rebooted take host out of maintenance mode Expected Results Host should go to Active Label should go green + <ol> <li>For host in maintenance mode that has been rebooted take host out of maintenance mode</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host should go to Active</li> <li>Label should go green</li> </ol> Take host out of maintenance mode that has not been rebooted (e2e_be) https://harvester.github.io/tests/manual/hosts/maintenance-mode-enable-host-not-rebooted/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-enable-host-not-rebooted/ - For host in maintenance mode that has not been rebooted take host out of maintenance mode Expected Results Host should go to Active Label should go green + <ol> <li>For host in maintenance mode that has not been rebooted take host out of maintenance mode</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host 
should go to Active</li> <li>Label should go green</li> </ol> Temporary network disruption https://harvester.github.io/tests/manual/hosts/negative-network-disruption/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/negative-network-disruption/ - Create VMs on the cluster. Disable network of a node for some time. e.g. 5 sec, 5 mins Expected Results VM should be accessible after the network is up. + <ol> <li>Create VMs on the cluster.</li> <li>Disable network of a node for some time. e.g. 5 sec, 5 mins</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should be accessible after the network is up.</li> </ol> Test NTP server timesync https://harvester.github.io/tests/manual/hosts/1535-test-ntp-timesync/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/1535-test-ntp-timesync/ - Related issues: #1535 NTP daemon in host OS Environment setup This should be on at least a 3 node setup that has been running for several hours that had NTP servers setup during install Verification Steps SSH into nodes and verify times are close Verify NTP is active with sudo timedatectl status Expected Results Times should be within a minute of each other NTP should show as active + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1535">#1535</a> NTP daemon in host OS</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>This should be on at least a 3 node setup that has been running for several hours that had NTP servers setup during install</p> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>SSH into nodes and verify times are close</li> <li>Verify NTP is active with <code>sudo timedatectl status</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Times should be within a minute of each other</li> <li>NTP should show as active</li> </ol> Turn off host that is in maintenance mode (e2e_be) https://harvester.github.io/tests/manual/hosts/maintenance-mode-turn-off-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/maintenance-mode-turn-off-host/ - Put host in maintenance mode Migrate VMs Wait for VMs to migrate Wait for any vms to migrate off Shut down Host Expected Results Host should start to go into maintenance mode Any VMs should migrate off Host should go into maintenance mode host should shut down Maintenance mode label in hosts list should go red Known bugs https://github.com/harvester/harvester/issues/1272 + <ol> <li>Put host in maintenance mode</li> <li>Migrate VMs</li> <li>Wait for VMs to migrate</li> <li>Wait for any vms to migrate off</li> <li>Shut down Host</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Host should start to go into maintenance mode</li> <li>Any VMs should migrate off</li> <li>Host should go into maintenance mode</li> <li>host should shut down</li> <li>Maintenance mode label in hosts list should go red</li> </ol> <h3 id="known-bugs">Known bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/1272">https://github.com/harvester/harvester/issues/1272</a></p> Verify Enabling maintenance mode https://harvester.github.io/tests/manual/hosts/verify-enabling-maintenance-mode/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/verify-enabling-maintenance-mode/ - Navigate to the Hosts page and select the node Click Maintenance Mode Expected Results The existing VM should get migrated to other nodes. 
Verify the CRDs to see the maintenance mode is enabled. Comments Needs other test cases to be added If VM migration fails How does live migration work What happens if there are no schedulable resources on nodes Check CRDs on hosts On going into maintenance mode kubectl get virtualmachines &ndash;all-namespaces Kubectl get virtualmachines/name -o yaml On coming out of maintenance mode kubectl get virtualmachines &ndash;all-namespaces Kubectl get virtualmachines/name -o yaml Check that maintenance mode host isn&rsquo;t schedulable Fully provision all nodes and try to create a VM It should fail Migration with maintenance mode What if migration gets stuck, can you cancel VMs going to different hosts Canceling maintenance mode P1 Put in maintenance mode Check migration of VMs Check status of VMs modify filesystem on VMs Check status of host Take host out of maintenance mode Check status of host Migrate VMs back to host Check filesystem Create new VMs on host Check status of VMs + <ol> <li>Navigate to the Hosts page and select the node</li> <li>Click Maintenance Mode</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The existing VM should get migrated to other nodes.</li> <li>Verify the CRDs to see the maintenance mode is enabled.</li> </ol> <h3 id="comments">Comments</h3> <ol> <li>Needs other test cases to be added</li> <li>If VM migration fails</li> <li>How does live migration work</li> <li>What happens if there are no schedulable resources on nodes <ul> <li>Check CRDs on hosts <ul> <li>On going into maintenance mode</li> <li>kubectl get virtualmachines &ndash;all-namespaces</li> </ul> </li> <li>Kubectl get virtualmachines/name -o yaml <ul> <li>On coming out of maintenance mode</li> <li>kubectl get virtualmachines &ndash;all-namespaces</li> </ul> </li> </ul> </li> <li>Kubectl get virtualmachines/name -o yaml <ul> <li>Check that maintenance mode host isn&rsquo;t schedulable <ul> <li>Fully provision all nodes and try to create a VM</li> </ul> </li> </ul> </li> <li>It should fail <ul> <li>Migration with maintenance mode</li> <li>What if migration gets stuck, can you cancel</li> <li>VMs going to different hosts</li> <li>Canceling maintenance mode</li> <li>P1 <ul> <li>Put in maintenance mode</li> <li>Check migration of VMs</li> <li>Check status of VMs</li> <li>modify filesystem on VMs</li> <li>Check status of host</li> <li>Take host out of maintenance mode</li> <li>Check status of host</li> <li>Migrate VMs back to host</li> <li>Check filesystem</li> <li>Create new VMs on host</li> <li>Check status of VMs</li> </ul> </li> </ul> </li> </ol> Verify the Filter on the Host page https://harvester.github.io/tests/manual/hosts/verify-filter-on-host-page/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/verify-filter-on-host-page/ - Enter name of a host and verify the nodes get filtered out. Expected Results The edited name should be reflected on the host. + <ol> <li>Enter name of a host and verify the nodes get filtered out.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The edited name should be reflected on the host.</li> </ol> Verify the info of the node https://harvester.github.io/tests/manual/hosts/verify-node-info/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/verify-node-info/ - Navigate to the hosts tab and verify the following. State Name Host IP CPU Memory Storage Size Age Expected Results All the data/status shown on the page should be correct. 
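The CRD checks mentioned in the maintenance-mode comments above can be grouped into a short sketch; this assumes kubectl access from a management node, and that the VirtualMachine resources follow the KubeVirt CRDs Harvester ships (namespace and name below are placeholders):
<pre tabindex="0"><code># List VirtualMachine objects before and after toggling maintenance mode
kubectl get virtualmachines --all-namespaces
# Inspect a single VM spec/status in detail
kubectl get virtualmachine &lt;name&gt; -n &lt;namespace&gt; -o yaml
# A host in maintenance mode is cordoned, so it shows SchedulingDisabled here
kubectl get nodes
</code></pre>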
+ <ol> <li>Navigate to the hosts tab and verify the following. <ul> <li>State</li> <li>Name</li> <li>Host IP</li> <li>CPU</li> <li>Memory</li> <li>Storage Size</li> <li>Age</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>All the data/status shown on the page should be correct.</li> </ol> Verify the state for Powered down node https://harvester.github.io/tests/manual/hosts/negative-verify-state-powered-down-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/hosts/negative-verify-state-powered-down-node/ - Power down the node and check the state of the node in the cluster Expected Results The node state should show unavailable + <ol> <li>Power down the node and check the state of the node in the cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The node state should show unavailable</li> </ol> diff --git a/manual/images/index.xml b/manual/images/index.xml index 204a762da..b30d4d7ab 100644 --- a/manual/images/index.xml +++ b/manual/images/index.xml @@ -12,77 +12,77 @@ https://harvester.github.io/tests/manual/images/add-labels/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/add-labels/ - Add multiple labels to the images. Click save Expected Results Labels should be added successfully + <ol> <li>Add multiple labels to the images.</li> <li>Click save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Labels should be added successfully</li> </ol> Create Images with valid image URL (e2e_be_fe) https://harvester.github.io/tests/manual/images/create-images-with-valid-image-url/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/create-images-with-valid-image-url/ - Create image with cloud image available for openSUSE. http://download.opensuse.org/repositories/Cloud:/Images:/Leap_15.3/images/openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Expected Results Image should show state as Active. Check the backing image in Longhorn Known Bugs https://github.com/harvester/harvester/issues/1269 + <ol> <li>Create image with cloud image available for openSUSE. <a href="http://download.opensuse.org/repositories/Cloud:/Images:/Leap_15.3/images/openSUSE-Leap-15.3.x86_64-NoCloud.qcow2">http://download.opensuse.org/repositories/Cloud:/Images:/Leap_15.3/images/openSUSE-Leap-15.3.x86_64-NoCloud.qcow2</a></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Image should show state as Active.</li> <li>Check the backing image in Longhorn</li> </ol> <h3 id="known-bugs">Known Bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/1269">https://github.com/harvester/harvester/issues/1269</a></p> Create with invalid image (e2e_be_fe) https://harvester.github.io/tests/manual/images/negative-create-with-invalid-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/negative-create-with-invalid-image/ - Create image with invalid URL. e.g. - https://test.img Expected Results Image state shows as Failed + <ol> <li>Create image with invalid URL. e.g. - <a href="https://test.img">https://test.img</a></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Image state shows as Failed</li> </ol> Delete the image (e2e_be_fe) https://harvester.github.io/tests/manual/images/delete-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/delete-image/ - Select an image with state active. Delete the image. Create another image with same name. Delete the newly created image. 
Delete an image with failed state Expected Results The image should be deleted successfully. Check the CRDS VirtualMachineImage. User should be able to create a new image with same name. Check the backing image in Longhorn. + <ol> <li>Select an image with state active.</li> <li>Delete the image.</li> <li>Create another image with same name.</li> <li>Delete the newly created image.</li> <li>Delete an image with failed state</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The image should be deleted successfully. Check the CRDS VirtualMachineImage.</li> <li>User should be able to create a new image with same name.</li> <li>Check the backing image in Longhorn.</li> </ol> Delete VM with exported image(e2e_fe) https://harvester.github.io/tests/manual/images/1602-delete-vm-with-exported-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/1602-delete-vm-with-exported-image/ - Related issues: #1602 exported image can&rsquo;t be deleted after vm removed Verification Steps create vm &ldquo;vm-1&rdquo; create a image &ldquo;img-1&rdquo; by export the volume used by vm &ldquo;vm-1&rdquo; delete vm &ldquo;vm-1&rdquo; delete image &ldquo;img-1&rdquo; Expected Results image &ldquo;img-1&rdquo; will be deleted + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1602">#1602</a> exported image can&rsquo;t be deleted after vm removed</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>create vm &ldquo;vm-1&rdquo;</li> <li>create a image &ldquo;img-1&rdquo; by export the volume used by vm &ldquo;vm-1&rdquo;</li> <li>delete vm &ldquo;vm-1&rdquo;</li> <li>delete image &ldquo;img-1&rdquo;</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>image &ldquo;img-1&rdquo; will be deleted</li> </ol> Edit images (e2e_be_fe) https://harvester.github.io/tests/manual/images/edit-images/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/edit-images/ - Edit image. Try to edit the description Try to edit the URL Try to edit the Labels Expected Results User should be able to edit the description and Labels User should not be able to edit the URL + <ol> <li>Edit image. 
<ul> <li>Try to edit the description</li> <li>Try to edit the URL</li> <li>Try to edit the Labels</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>User should be able to edit the description and Labels</li> <li>User should not be able to edit the URL</li> </ol> Update image labels after deleting source VM(e2e_fe) https://harvester.github.io/tests/manual/images/1602-update-labels-on-image-after-vm-delete/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/1602-update-labels-on-image-after-vm-delete/ - Related issues: #1602 exported image can&rsquo;t be deleted after vm removed Verification Steps create vm &ldquo;vm-1&rdquo; create an image &ldquo;img-1&rdquo; by exporting the volume used by vm &ldquo;vm-1&rdquo; delete vm &ldquo;vm-1&rdquo; update image &ldquo;img-1&rdquo; labels Expected Results image &ldquo;img-1&rdquo; will be updated + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1602">#1602</a> exported image can&rsquo;t be deleted after vm removed</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>create vm &ldquo;vm-1&rdquo;</li> <li>create an image &ldquo;img-1&rdquo; by exporting the volume used by vm &ldquo;vm-1&rdquo;</li> <li>delete vm &ldquo;vm-1&rdquo;</li> <li>update image &ldquo;img-1&rdquo; labels</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>image &ldquo;img-1&rdquo; will be updated</li> </ol> Upload Cloud Image (e2e_be) https://harvester.github.io/tests/manual/images/upload-cloud-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/upload-cloud-image/ - Upload image to images page Create new vm with image using appropriate template Run VM health checks Expected Results Image should upload Health checks should pass + <ol> <li>Upload image to images page</li> <li>Create new vm with image using appropriate template</li> <li>Run VM health checks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Image should upload</li> <li>Health checks should pass</li> </ol> Upload image that is invalid https://harvester.github.io/tests/manual/images/negative-upload-invalid-image-file/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/negative-upload-invalid-image-file/ - Try to upload invalid image file to images page Something like dmg, or tar.gz Expected Results You should get an error Known Bugs https://github.com/harvester/harvester/issues/1425 + <ol> <li>Try to upload invalid image file to images page <ul> <li>Something like dmg, or tar.gz</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error</li> </ol> <h3 id="known-bugs">Known Bugs</h3> <p><a href="https://github.com/harvester/harvester/issues/1425">https://github.com/harvester/harvester/issues/1425</a></p> Upload ISO Image(e2e_fe) https://harvester.github.io/tests/manual/images/upload-iso-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/upload-iso-image/ - Upload image to images page Create new vm with image using appropriate template Run VM health checks Expected Results Image should upload Health checks should pass + <ol> <li>Upload image to images page</li> <li>Create new vm with image using appropriate template</li> <li>Run VM health checks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Image should upload</li> <li>Health checks should pass</li> </ol> Verify the options available for image 
https://harvester.github.io/tests/manual/images/verify-options-available-for-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/images/verify-options-available-for-image/ - Create vm with YAML using the menu option. Download Yaml Verify the downloaded Yaml file. Clone the Image Expected Results All user-specified fields must match what show on GUI: Namespace Name Description URL Labels + <ol> <li>Create vm with YAML using the menu option.</li> <li>Download Yaml</li> <li>Verify the downloaded Yaml file.</li> <li>Clone the Image</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>All user-specified fields must match what show on GUI: <ul> <li>Namespace</li> <li>Name</li> <li>Description</li> <li>URL</li> <li>Labels</li> </ul> </li> </ol> diff --git a/manual/live-migration/index.xml b/manual/live-migration/index.xml index 140fb1f99..5ae5c01ac 100644 --- a/manual/live-migration/index.xml +++ b/manual/live-migration/index.xml @@ -12,161 +12,161 @@ https://harvester.github.io/tests/manual/live-migration/1401-support-volume-hot-unplug-live-migrate/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/1401-support-volume-hot-unplug-live-migrate/ - Related issues: #1401 Http proxy setting download image Environment setup Setup an airgapped harvester Create an 3 nodes harvester cluster with large size disks Verification Steps Scenario2: Live migrate VM not have hot-plugged volume before, do hot-plugged the unplugged. Create a virtual machine Create several volumes (without image) Add volume, hot-plug volume to virtual machine Open virtual machine, find hot-plugged volume Click Detach volume Add volume again Migrate VM from one node to another Detach volume Add unplugged volume again Expected Results Can hot-plug volume without error Can hot-unplug the pluggable volumes without restarting VM The de-attached volume can also be hot-plug and mount back to VM + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1401">#1401</a> Http proxy setting download image</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup an airgapped harvester</p> <ol> <li>Create an 3 nodes harvester cluster with large size disks</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <h5 id="scenario2-live-migrate-vm-not-have-hot-plugged-volume-before-do-hot-plugged-the-unplugged">Scenario2: Live migrate VM not have hot-plugged volume before, do hot-plugged the unplugged.</h5> <ol> <li>Create a virtual machine</li> <li>Create several volumes (without image)</li> <li>Add volume, hot-plug volume to virtual machine</li> <li>Open virtual machine, find hot-plugged volume</li> <li>Click Detach volume</li> <li>Add volume again</li> <li>Migrate VM from one node to another</li> <li>Detach volume</li> <li>Add unplugged volume again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can hot-plug volume without error</li> <li>Can hot-unplug the pluggable volumes without restarting VM</li> <li>The de-attached volume can also be hot-plug and mount back to VM</li> </ol> Initiate multiple migrations at one time https://harvester.github.io/tests/manual/live-migration/initiate-multple-migrations-same-time/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/initiate-multple-migrations-same-time/ - Initiate live migration for a vm. While the live migration is in progress, initiate another migration Expected Results Both migration should work fine. 
The VMs should be accessible after the migration + <ol> <li>Initiate live migration for a vm.</li> <li>While the live migration is in progress, initiate another migration</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Both migration should work fine.</li> <li>The VMs should be accessible after the migration</li> </ol> Migrate a turned on VM from one host to another https://harvester.github.io/tests/manual/live-migration/migrate-turned-on-vm-to-another-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-turned-on-vm-to-another-host/ - Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM created with cloud init config data https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-cloud-init/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-cloud-init/ - Create a new VM with cloud init config data Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM with cloud init config data</li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM created with user data config https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-user-data/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-user-data/ - Create a new VM with a password specified by user data config Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the 
file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM with a password specified by user data config</li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM that has multiple volumes https://harvester.github.io/tests/manual/live-migration/migrate-vm-multiple-volumes/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-multiple-volumes/ - Create a new VM with a root disk and a CDROM volume Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM with a root disk and a CDROM volume</li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM that was created from a template https://harvester.github.io/tests/manual/live-migration/migrate-vm-created-from-template/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-created-from-template/ - Create a new VM from a template Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM from a template</li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 
id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM that was created using a restore backup to new VM https://harvester.github.io/tests/manual/live-migration/migrate-vm-created-from-restore-to-new/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-created-from-restore-to-new/ - Take an existing backup Restore the backup to a new VM Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Take an existing backup</li> <li>Restore the backup to a new VM</li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM with 1 backup https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-one-backup/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-one-backup/ - Create a new VM Create a backup Add a new file to the home directory Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM</li> <li>Create a backup</li> <li>Add a new file to the home directory</li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM with a saved SSH Key https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-ssh/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-with-ssh/ - Create a 
new VM with an SSH key Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM with an SSH key</li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM with multiple backups https://harvester.github.io/tests/manual/live-migration/migrate-vm-multiple-backups/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-multiple-backups/ - Create a new VM Create a backup Add a new file to the home directory Create a new backup Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM</li> <li>Create a backup</li> <li>Add a new file to the home directory</li> <li>Create a new backup</li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate a VM with multiple networks https://harvester.github.io/tests/manual/live-migration/migrate-vm-multiple-networks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-vm-multiple-networks/ - Create a new VM with one management network in masquerade mode one VLAN network Create a new file on the machine Migrate the VM from one host in the cluster to another Connect via console Check for the file Change the file and save it Verify that you can close and open the file again Expected Results File should create correctly VM should go into migrating status VM should go out of migrating status It should show the new node on the host column in the VM list It should have the same IP You should be able to edit and re-open the file + <ol> <li>Create a new VM with <ul> <li>one management network in masquerade mode</li> 
<li>one VLAN network</li> </ul> </li> <li>Create a new file on the machine</li> <li>Migrate the VM from one host in the cluster to another</li> <li>Connect via console</li> <li>Check for the file</li> <li>Change the file and save it</li> <li>Verify that you can close and open the file again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>File should create correctly</li> <li>VM should go into migrating status</li> <li>VM should go out of migrating status</li> <li>It should show the new node on the host column in the VM list</li> <li>It should have the same IP</li> <li>You should be able to edit and re-open the file</li> </ol> Migrate to Node without replicaset https://harvester.github.io/tests/manual/live-migration/migrate-to-node-without-replicaset/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/migrate-to-node-without-replicaset/ - Create a new VM on a 4 node cluster Check which nodes have copies of the replica set Migrate the VM to the host that does not have the volume Expected Results VM should create correctly + <ol> <li>Create a new VM on a 4 node cluster</li> <li>Check which nodes have copies of the replica set</li> <li>Migrate the VM to the host that does not have the volume</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create correctly</li> </ol> Migrate VM from Restored backup https://harvester.github.io/tests/manual/live-migration/restored_vm_migration/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/restored_vm_migration/ - Ref: https://github.com/harvester/harvester/issues/1086 Verify Items VM can be migrate to any node with any times Case: Migrate a restored VM Install Harvester with at least 2 nodes setup backup-target with NFS Create image for VM creation Create VM a Add file with some data in VM a Backup VM a as a-bak Restore backup a-bak into VM b Start VM b then check added file should exist with same content Migrate VM b to another node, then check added file should exist with same content Migrate VM b again, then check added file should exist with same content + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1086">https://github.com/harvester/harvester/issues/1086</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>VM can be migrate to any node with any times</li> </ul> <h2 id="case-migrate-a-restored-vm">Case: Migrate a restored VM</h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>setup backup-target with NFS</li> <li>Create image for VM creation</li> <li>Create VM <strong>a</strong></li> <li>Add file with some data in VM <strong>a</strong></li> <li>Backup VM <strong>a</strong> as <strong>a-bak</strong></li> <li>Restore backup <strong>a-bak</strong> into VM <strong>b</strong></li> <li>Start VM <strong>b</strong> then check added file should exist with same content</li> <li>Migrate VM <strong>b</strong> to another node, then check added file should exist with same content</li> <li>Migrate VM <strong>b</strong> again, then check added file should exist with same content</li> </ol> Negative migrate a turned on VM from one host to another https://harvester.github.io/tests/manual/live-migration/negative-migrate-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/negative-migrate-vm/ - Migrate the VM from one host in the cluster to another Turn off/disconnect node while migrating Expected Results Migration should fail You should get an error message in the status + <ol> 
<li>Migrate the VM from one host in the cluster to another</li> <li>Turn off/disconnect node while migrating</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Migration should fail</li> <li>You should get an error message in the status</li> </ol> Negative network disconnection for a longer time while migration is in progress https://harvester.github.io/tests/manual/live-migration/negative-network-disconnect-while-migrating/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/negative-network-disconnect-while-migrating/ - Initiate VM migration While migration is in progress, disconnect network for 100 sec on the node where the VM is scheduled Expected Results Migration should fail but volume data should be intact The VM should be accessible during the migration and should also be accessible once the migration fails + <ol> <li>Initiate VM migration</li> <li>While migration is in progress, disconnect network for 100 sec on the node where the VM is scheduled</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Migration should fail but volume data should be intact</li> <li>The VM should be accessible during the migration and should also be accessible once the migration fails</li> </ol> Negative network disconnection for a short time while migration is in progress https://harvester.github.io/tests/manual/live-migration/negative-network-disruption-while-migrating/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/negative-network-disruption-while-migrating/ - Initiate VM migration. While migration is in progress, disconnect network for 5 sec on the node where the VM is scheduled Expected Results Migration should resume once the network is up again The VM should be accessible during and after the migration + <ol> <li>Initiate VM migration.</li> <li>While migration is in progress, disconnect network for 5 sec on the node where the VM is scheduled</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Migration should resume once the network is up again</li> <li>The VM should be accessible during and after the migration</li> </ol> Negative node down while migration is in progress https://harvester.github.io/tests/manual/live-migration/negative-node-down-while-migrating/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/negative-node-down-while-migrating/ - Initiate VM migration. While migration is in progress, shut the node where the VM is scheduled. 
After failure, initiate the migration to another node Expected Results Migration should fail but volume data should be intact The VM should be accessible on older node The migration scheduled for another node should work fine The VM should be accessible during and after the migration + <ol> <li>Initiate VM migration.</li> <li>While migration is in progress, shut the node where the VM is scheduled.</li> <li>After failure, initiate the migration to another node</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Migration should fail but volume data should be intact</li> <li>The VM should be accessible on older node</li> <li>The migration scheduled for another node should work fine</li> <li>The VM should be accessible during and after the migration</li> </ol> Negative node un-schedulable during live migration https://harvester.github.io/tests/manual/live-migration/negative-node-unschedulable/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/negative-node-unschedulable/ - Prerequisite: Cluster is of 3 nodes. VM is running on Node-1 Node-2 and Node-3 don&rsquo;t have space to migrate a VM to them. Steps: Create a vm on node-1 Migrate the VM. Expected Results Migration should not be started. Relevant error should be shown on the GUI. The existing VM should be accessible and the health check of the VM should be fine + <h2 id="prerequisite">Prerequisite:</h2> <ol> <li>Cluster is of 3 nodes.</li> <li>VM is running on Node-1</li> <li>Node-2 and Node-3 don&rsquo;t have space to migrate a VM to them.</li> </ol> <h2 id="steps">Steps:</h2> <ol> <li>Create a vm on node-1</li> <li>Migrate the VM.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Migration should not be started.</li> <li>Relevant error should be shown on the GUI.</li> <li>The existing VM should be accessible and the health check of the VM should be fine</li> </ol> Support volume hot plug live migrate https://harvester.github.io/tests/manual/live-migration/support-volume-hot-unplug-live-migrate/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/support-volume-hot-unplug-live-migrate/ - Related issues: #1401 Support volume hot-unplug Category: Storage Environment setup Setup an airgapped harvester Create an 3 nodes harvester cluster with large size disks Verification Steps Scenario2: Live migrate VM not have hot-plugged volume before, do hot-plugged the unplugged. 
Create a virtual machine Create several volumes (without image) Add volume, hot-plug volume to virtual machine Open virtual machine, find hot-plugged volume Click Detach volume Add volume again Migrate VM from one node to another Detach volume Add unplugged volume again Expected Results Can hot-plug volume without error Can hot-unplug the pluggable volumes without restarting VM The detached volume can also be hot-plugged and mounted back to the VM + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1401">#1401</a> Support volume hot-unplug</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup an airgapped harvester</p> <ol> <li>Create a 3-node harvester cluster with large size disks</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <h5 id="scenario2-live-migrate-vm-not-have-hot-plugged-volume-before-do-hot-plugged-the-unplugged">Scenario 2: Live migrate a VM that did not have a hot-plugged volume before, then hot-plug and unplug volumes.</h5> <ol> <li>Create a virtual machine</li> <li>Create several volumes (without image)</li> <li>Add volume, hot-plug volume to virtual machine</li> <li>Open virtual machine, find hot-plugged volume</li> <li>Click Detach volume</li> <li>Add volume again</li> <li>Migrate VM from one node to another</li> <li>Detach volume</li> <li>Add unplugged volume again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can hot-plug volume without error</li> <li>Can hot-unplug the pluggable volumes without restarting VM</li> <li>The detached volume can also be hot-plugged and mounted back to the VM</li> </ol> Test aborting live migration https://harvester.github.io/tests/manual/live-migration/abort-live-migration/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/abort-live-migration/ - On a VM that is turned on select migrate Start the migration Abort the migration Expected Results You should see the status move to migrating You should see the status move to aborting migration You should see the status move to running The VM should pass health checks + <ol> <li>On a VM that is turned on select migrate</li> <li>Start the migration</li> <li>Abort the migration</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should see the status move to migrating</li> <li>You should see the status move to aborting migration</li> <li>You should see the status move to running</li> <li>The VM should pass health checks</li> </ol> Test zero downtime for live migration download test https://harvester.github.io/tests/manual/live-migration/zero-downtime-download-test/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/live-migration/zero-downtime-download-test/ - Connect to VM via console Start a large file download Live migrate VM to new host Verify that file download does not fail Expected Results Console should open VM should start to migrate File download should not fail + <ol> <li>Connect to VM via console</li> <li>Start a large file download</li> <li>Live migrate VM to new host</li> <li>Verify that file download does not fail</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Console should open</li> <li>VM should start to migrate</li> <li>File download should not fail</li> </ol> Test zero downtime for live migration ping test https://harvester.github.io/tests/manual/live-migration/zero-downtime-ping-test/ Mon, 01 Jan 0001 00:00:00 +0000 
https://harvester.github.io/tests/manual/live-migration/zero-downtime-ping-test/ - Continually ping VM Verify that ping is getting a response Live migrate VM to new host Verify that ping continues Expected Results Ping should get response VM should start to migrate Ping should not get any dropped packets + <ol> <li>Continually ping VM</li> <li>Verify that ping is getting a response</li> <li>Live migrate VM to new host</li> <li>Verify that ping continues</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Ping should get response</li> <li>VM should start to migrate</li> <li>Ping should not get any dropped packets</li> </ol> diff --git a/manual/misc/index.xml b/manual/misc/index.xml index 62ae37052..a3e392709 100644 --- a/manual/misc/index.xml +++ b/manual/misc/index.xml @@ -12,49 +12,49 @@ https://harvester.github.io/tests/manual/misc/download_kubeconfig/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/misc/download_kubeconfig/ - Ref: https://github.com/harvester/harvester/issues/1349 Verify Items Download KubeConfig should not exist in general views Download Kubeconfig should exist in Support page Downloaded file should be named with suffix .yaml Case: Download KubeConfig navigate to every pages to make sure download kubeconfig icon will not appear in header section navigate to support page to check Download KubeConfig is work normally + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1349">https://github.com/harvester/harvester/issues/1349</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Download KubeConfig should not exist in general views</li> <li>Download Kubeconfig should exist in Support page</li> <li>Downloaded file should be named with suffix <code>.yaml</code></li> </ul> <h2 id="case-download-kubeconfig">Case: Download KubeConfig</h2> <ul> <li>navigate to every pages to make sure download kubeconfig icon will not appear in header section</li> <li>navigate to support page to check <code>Download KubeConfig</code> is work normally</li> </ul> Check favicon and title on pages https://harvester.github.io/tests/manual/misc/1520-check-title-and-favicon/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/misc/1520-check-title-and-favicon/ - Related issues: #1520 incorrect title and favicon Verification Steps Log into Harvester Check page title and favicon on each of these pages dashboard main page settings support Volumes SSH Keys Host info Expected Results Harvester favicon and title should show on each page + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1520">#1520</a> incorrect title and favicon</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Log into Harvester</li> <li>Check page title and favicon on each of these pages <ul> <li>dashboard</li> <li>main page</li> <li>settings</li> <li>support</li> <li>Volumes</li> <li>SSH Keys</li> <li>Host info</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Harvester favicon and title should show on each page</li> </ol> Check Harvester CloudInit CRDs within Harvester, Terraform & Rancher https://harvester.github.io/tests/manual/misc/3902-elemental-cloud-init-harvester-crds/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/misc/3902-elemental-cloud-init-harvester-crds/ - Related issues: #3902 support elemental cloud-init via harvester-node-manager Testing With Terraform TBD Testing From Harvester UI TBD Testing From Rancher Fleet UI / Harvester Fleet 
Controller TBD Testing w/ Harvester Kubeconfig via Kubectl &amp; K9s (or similar tool) Pre-Reqs: Have an available multi-node Harvester cluster, w/out your ssh-key present on any nodes Provision cluster however is easiest K9s (or other similar kubectl tooling) kubectl audit elemental toolkit for an understanding of stages audit harvester configuration to correlate properties to elemental-toolkit based stages / functions Negative Tests: Validate Non-YAML Files Get . + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/3902">#3902</a> support elemental cloud-init via harvester-node-manager</li> </ul> <h2 id="testing-with-terraform">Testing With Terraform</h2> <ol> <li>TBD</li> </ol> <h2 id="testing-from-harvester-ui">Testing From Harvester UI</h2> <ol> <li>TBD</li> </ol> <h2 id="testing-from-rancher-fleet-ui--harvester-fleet-controller">Testing From Rancher Fleet UI / Harvester Fleet Controller</h2> <ol> <li>TBD</li> </ol> <h2 id="testing-w-harvester-kubeconfig-via-kubectl--k9s-or-similar-tool">Testing w/ Harvester Kubeconfig via Kubectl &amp; K9s (or similar tool)</h2> <h3 id="pre-reqs">Pre-Reqs:</h3> <ul> <li>Have an available multi-node Harvester cluster, w/out your ssh-key present on any nodes</li> <li>Provision cluster however is easiest</li> <li>K9s (or other similar kubectl tooling)</li> <li>kubectl</li> <li>audit <a href="https://rancher.github.io/elemental-toolkit/docs/customizing/stages">elemental toolkit</a> for an understanding of stages</li> <li>audit <a href="https://docs.harvesterhci.io/v1.2/install/harvester-configuration">harvester configuration</a> to correlate properties to elemental-toolkit based stages / functions</li> </ul> <h3 id="negative-tests">Negative Tests:</h3> <h4 id="validate-non-yaml-files-get-yaml-as-suffix-on-file-system">Validate Non-YAML Files Get .yaml as Suffix On File-System</h4> <ol> <li>Prepare a YAML loadout of a CloudInit resource that takes the shape of:</li> </ol> <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">apiVersion</span>: <span style="color:#ae81ff">node.harvesterhci.io/v1beta1</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">kind</span>: <span style="color:#ae81ff">CloudInit</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">metadata</span>: </span></span><span style="display:flex;"><span> <span style="color:#f92672">name</span>: <span style="color:#ae81ff">write-file-with-non-yaml-filename</span> </span></span><span style="display:flex;"><span><span style="color:#f92672">spec</span>: </span></span><span style="display:flex;"><span> <span style="color:#f92672">matchSelector</span>: {} </span></span><span style="display:flex;"><span> <span style="color:#f92672">filename</span>: <span style="color:#ae81ff">99_filewrite.log</span> </span></span><span style="display:flex;"><span> <span style="color:#f92672">contents</span>: |<span style="color:#e6db74"> </span></span></span><span style="display:flex;"><span><span style="color:#e6db74"> stages: </span></span></span><span style="display:flex;"><span><span style="color:#e6db74"> fs: </span></span></span><span style="display:flex;"><span><span style="color:#e6db74"> - name: &#34;write file test&#34; </span></span></span><span style="display:flex;"><span><span style="color:#e6db74"> commands: </span></span></span><span 
style="display:flex;"><span><span style="color:#e6db74"> - echo &#34;hello, there&#34; &gt; /etc/sillyfile.conf</span> </span></span></code></pr Create support bundle in multi-node Harvester cluster with one node off https://harvester.github.io/tests/manual/misc/1524-create-support-bundle-with-one-node-off/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/misc/1524-create-support-bundle-with-one-node-off/ - Related issues: #1524 Can&rsquo;t create support bundle if one node is off Verification Steps On a multi-node harvester cluster power off one node Navigate to support create support bundle Expected Results Support bundle should create and be downloaded YOu should be able to extract and examine support bundle + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1524">#1524</a> Can&rsquo;t create support bundle if one node is off</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>On a multi-node harvester cluster power off one node</li> <li>Navigate to support create support bundle</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Support bundle should create and be downloaded</li> <li>YOu should be able to extract and examine support bundle</li> </ol> Download kubeconfig after shutting down harvester cluster https://harvester.github.io/tests/manual/misc/download-kubeconfig-after-shutting-down-harvester-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/misc/download-kubeconfig-after-shutting-down-harvester-cluster/ - Related issues: #1475 After shutting down the cluster the kubeconfig becomes invalid Category: Host Verification Steps Shutdown harvester node 3, wait for fully power off Shutdown harvester node 2, wait for fully power off Shutdown harvester node 1, wait for fully power off Wait for more than hours or over night Power on node 1 to console page until you see management url Power on node 2 to console page until you see management url + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1475">#1475</a> After shutting down the cluster the kubeconfig becomes invalid</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Host</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Shutdown harvester node 3, wait for fully power off</p> </li> <li> <p>Shutdown harvester node 2, wait for fully power off</p> </li> <li> <p>Shutdown harvester node 1, wait for fully power off</p> </li> <li> <p>Wait for more than hours or over night</p> </li> <li> <p>Power on node 1 to console page until you see management url <img src="https://user-images.githubusercontent.com/29251855/145156486-60507643-8a96-4b4a-862d-367c41665e6b.png" alt="image"></p> Test NTP server timesync https://harvester.github.io/tests/manual/misc/1535-test-ntp-timesync/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/misc/1535-test-ntp-timesync/ - Related issues: #1535 NTP daemon in host OS Environment setup This should be on at least a 3 node setup that has been running for several hours that had NTP servers setup during install Verification Steps SSH into nodes and verify times are close Verify NTP is active with sudo timedatectl status Expected Results Times should be within a minute of each other NTP should show as active + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1535">#1535</a> NTP daemon in host OS</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>This should be on at least a 3 
node setup that has been running for several hours that had NTP servers setup during install</p> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>SSH into nodes and verify times are close</li> <li>Verify NTP is active with <code>sudo timedatectl status</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Times should be within a minute of each other</li> <li>NTP should show as active</li> </ol> Verify network data template https://harvester.github.io/tests/manual/misc/1634-terms-and-conditions-link/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/misc/1634-terms-and-conditions-link/ - Related issues: #1634 Welcome screen asks to agree to T&amp;Cs for using Rancher not Harvester Verification Steps Install Harvester Go to management page and see last line (before Continue button) Verify link to SUSE EULA https://www.suse.com/licensing/eula/ Expected Results Link should go to SUSE EULA + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1634">#1634</a> Welcome screen asks to agree to T&amp;Cs for using Rancher not Harvester</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester</li> <li>Go to management page and see last line (before Continue button)</li> <li>Verify link to SUSE EULA <a href="https://www.suse.com/licensing/eula/">https://www.suse.com/licensing/eula/</a></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Link should go to SUSE EULA <img src="https://user-images.githubusercontent.com/83787952/145657167-2d8ebd33-14d6-4c78-a30f-37075b206219.png" alt="image"></li> </ol> diff --git a/manual/network/index.xml b/manual/network/index.xml index ff23994af..3f13df826 100644 --- a/manual/network/index.xml +++ b/manual/network/index.xml @@ -12,168 +12,168 @@ https://harvester.github.io/tests/manual/network/add-multiple-networks-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/add-multiple-networks-form/ - Create a new VM via the web form Add both a management network and an external VLAN network Validate both interfaces exist in the VM ip link list Ping the VM from another VM that is only on the management VLAN Ping the VM from an external machine Expected Results The VM should create You should see three interfaces listed in VM You should get responses from pinging the VM You should get responses from pinging the VM + <ol> <li>Create a new VM via the web form</li> <li>Add both a management network and an external VLAN network</li> <li>Validate both interfaces exist in the VM <ul> <li><code>ip link list</code></li> </ul> </li> <li>Ping the VM from another VM that is only on the management VLAN</li> <li>Ping the VM from an external machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should create</li> <li>You should see three interfaces listed in VM</li> <li>You should get responses from pinging the VM</li> <li>You should get responses from pinging the VM</li> </ol> Add multiple Networks via YAML (e2e_be) https://harvester.github.io/tests/manual/network/add-multiple-networks-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/add-multiple-networks-yaml/ - Create a new VM via YAML Add both a management network and an external VLAN network Validate both interfaces exist in the VM ip link list Ping the VM from another VM that is only on the management VLAN Ping the VM from an external machine Expected Results The VM should create You should see three 
interfaces listed in VM You should get responses from pinging the VM You should get responses from pinging the VM + <ol> <li>Create a new VM via YAML</li> <li>Add both a management network and an external VLAN network</li> <li>Validate both interfaces exist in the VM <ul> <li><code>ip link list</code></li> </ul> </li> <li>Ping the VM from another VM that is only on the management VLAN</li> <li>Ping the VM from an external machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should create</li> <li>You should see three interfaces listed in VM</li> <li>You should get responses from pinging the VM</li> <li>You should get responses from pinging the VM</li> </ol> Add network reachability detection from host for the VLAN network https://harvester.github.io/tests/manual/network/add-network-reachability-detection-from-host-for-vlan-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/add-network-reachability-detection-from-host-for-vlan-network/ - Related issue: #1476 Add network reachability detection from host for the VLAN network Category: Network Environment Setup The network environment must have vlan network configured and also have DHCP server prepared on your testing vlan Verification Steps Enable virtual network with harvester-mgmt in harvester Create VLAN 806 with id 806 and set to default auto mode Import harvester to rancher 1 .Create cloud credential Provision a rke2 cluster to harvester Deploy a nginx server workload Open Service Discover -&gt; Services + <ul> <li>Related issue: <a href="https://github.com/harvester/harvester/issues/1476">#1476</a> Add network reachability detection from host for the VLAN network</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="environment-setup">Environment Setup</h2> <ul> <li>The network environment must have vlan network configured and also have DHCP server prepared on your testing vlan</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Enable virtual network with <code>harvester-mgmt</code> in harvester</li> <li>Create VLAN 806 with id <code>806</code> and set to default <code>auto</code> mode</li> <li>Import harvester to rancher 1 .Create cloud credential</li> <li>Provision a rke2 cluster to harvester <img src="https://user-images.githubusercontent.com/29251855/145564732-0a3cee15-a264-407f-800a-df2e7c649846.png" alt="image"></li> </ol> <p><img src="https://user-images.githubusercontent.com/29251855/145564961-c921f341-2c88-44cc-9c5e-08789e594552.png" alt="image"></p> Add VLAN network (e2e_be) https://harvester.github.io/tests/manual/network/add-vlan-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/add-vlan-network/ - Environment setup This should be done on a Harvester setup with at least 2 NICs and at least 2 nodes. This is easily tested in Vagrant Verification Steps Open settings on a harvester cluster Navigate to the VLAN settings page Click Enabled Check dropdown for NICs and verify that percentage is showing 100% Add the NIC Click Save Validate that it has updated in settings Expected Results You should be able to add the VLAN network device You should see in the settings list that it has your new default NIC + <h2 id="environment-setup">Environment setup</h2> <p>This should be done on a Harvester setup with at least 2 NICs and at least 2 nodes. 
This is easily tested in Vagrant</p> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Open settings on a harvester cluster</li> <li>Navigate to the VLAN settings page</li> <li>Click Enabled</li> <li>Check dropdown for NICs and verify that percentage is showing 100%</li> <li>Add the NIC</li> <li>Click Save</li> <li>Validate that it has updated in settings</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should be able to add the VLAN network device</li> <li>You should see in the settings list that it has your new default NIC</li> </ol> Create new network (e2e_be_fe) https://harvester.github.io/tests/manual/network/create-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/create-network/ - Navigate to the networks page in harvester Click Create Add a name Add a VLAN ID Click Create Expected Results You should be able to add the VLAN You should see the VLAN show up in the networks page + <ol> <li>Navigate to the networks page in harvester</li> <li>Click Create</li> <li>Add a name</li> <li>Add a VLAN ID</li> <li>Click Create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should be able to add the VLAN</li> <li>You should see the VLAN show up in the networks page</li> </ol> Delete external VLAN network via form https://harvester.github.io/tests/manual/network/delete-vlan-network-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/delete-vlan-network-form/ - On a VM with both an external VLAN and a management VLAN delete the external VLAN via the web form Validate interface was removed with ip link list Ping the VM from another VM that is only on the management VLAN Ping the VM from an external machine Expected Results The VM should update and reboot You should only see one interface (and the loopback) in the list You should not be able to ping the VM on the external VLAN You should get responses from the VM + <ol> <li>On a VM with both an external VLAN and a management VLAN delete the external VLAN via the web form</li> <li>Validate interface was removed with <ul> <li><code>ip link list</code></li> </ul> </li> <li>Ping the VM from another VM that is only on the management VLAN</li> <li>Ping the VM from an external machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should update and reboot</li> <li>You should only see one interface (and the loopback) in the list</li> <li>You should not be able to ping the VM on the external VLAN</li> <li>You should get responses from the VM</li> </ol> Delete external VLAN network via YAML (e2e_be) https://harvester.github.io/tests/manual/network/delete-vlan-network-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/delete-vlan-network-yaml/ - On a VM with both an external VLAN and a management VLAN delete the external VLAN via YAML Validate interface was removed with ip link list Ping the VM from another VM that is only on the management VLAN Ping the VM from an external machine Expected Results The VM should update and reboot You should only see one interface (and the loopback) in the list You should not be able to ping the VM on the external VLAN You should get responses from the VM + <ol> <li>On a VM with both an external VLAN and a management VLAN delete the external VLAN via YAML</li> <li>Validate interface was removed with <ul> <li><code>ip link list</code></li> </ul> </li> <li>Ping the VM from another VM that is only on the management VLAN</li> <li>Ping the 
VM from an external machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should update and reboot</li> <li>You should only see one interface (and the loopback) in the list</li> <li>You should not be able to ping the VM on the external VLAN</li> <li>You should get responses from the VM</li> </ol> Delete management network via form https://harvester.github.io/tests/manual/network/delete-management-network-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/delete-management-network-form/ - On a VM with both an external VLAN and a management VLAN delete the management VLAN via the web form Validate interface was removed with ip link list Ping the VM from another VM that is only on the management VLAN Ping the VM from an external machine Expected Results The VM should update and reboot You should only see one interface (and the loopback) in the list You should not be able to ping the VM on the management VLAN You should get responses from the VM + <ol> <li>On a VM with both an external VLAN and a management VLAN delete the management VLAN via the web form</li> <li>Validate interface was removed with <ul> <li><code>ip link list</code></li> </ul> </li> <li>Ping the VM from another VM that is only on the management VLAN</li> <li>Ping the VM from an external machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should update and reboot</li> <li>You should only see one interface (and the loopback) in the list</li> <li>You should not be able to ping the VM on the management VLAN</li> <li>You should get responses from the VM</li> </ol> Delete management network via YAML (e2e_be) https://harvester.github.io/tests/manual/network/delete-management-network-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/delete-management-network-yaml/ - On a VM with both an external VLAN and a management VLAN delete the management network via YAML Validate interface was removed with ip link list Ping the VM from another VM that is only on the management VLAN Ping the VM from an external machine Expected Results The VM should update and reboot You should only see one interface (and the loopback) in the list You should not be able to ping the VM on the management network You should get responses from the VM + <ol> <li>On a VM with both an external VLAN and a management VLAN delete the management network via YAML</li> <li>Validate interface was removed with <ul> <li><code>ip link list</code></li> </ul> </li> <li>Ping the VM from another VM that is only on the management VLAN</li> <li>Ping the VM from an external machine</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should update and reboot</li> <li>You should only see one interface (and the loopback) in the list</li> <li>You should not be able to ping the VM on the management network</li> <li>You should get responses from the VM</li> </ol> Disable and enable vlan cluster network https://harvester.github.io/tests/manual/network/disable-and-enable-vlan-cluster-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/disable-and-enable-vlan-cluster-network/ - Related issues: #1529 Failed to enable vlan cluster network after disable and enable again, display &ldquo;Network Error&rdquo; Category: Network Verification Steps Open settings and config vlan network Enable network and set default harvester-mgmt Disable network Enable network again Check Host, Network and harvester dashboard 
Repeat above steps several times Expected Results User can disable and enable network with default harvester-mgmt. Harvester dashboard and network work as expected + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1529">#1529</a> Failed to enable vlan cluster network after disable and enable again, display &ldquo;Network Error&rdquo;</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Open settings and config vlan network</li> <li>Enable network and set default harvester-mgmt</li> <li>Disable network</li> <li>Enable network again</li> <li>Check Host, Network and harvester dashboard</li> <li>Repeat above steps several times</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>User can disable and enable network with default <code>harvester-mgmt</code>.</li> <li>Harvester dashboard and network work as expected</li> </ol> Edit network via form change external VLAN to management network https://harvester.github.io/tests/manual/network/edit-network-form-change-vlan-to-management/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/edit-network-form-change-vlan-to-management/ - Edit VM and change external VLAN to management network with bridge type via the web form Ping VM Attempt to SSH to VM Expected Results VM should save and reboot You should be able to ping the VM from an external network You should be able to SSH to VM + <ol> <li>Edit VM and change external VLAN to management network with bridge type via the web form</li> <li>Ping VM</li> <li>Attempt to SSH to VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save and reboot</li> <li>You should be able to ping the VM from an external network</li> <li>You should be able to SSH to VM</li> </ol> Edit network via form change management network to external VLAN https://harvester.github.io/tests/manual/network/edit-network-form-change-management-to-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/edit-network-form-change-management-to-vlan/ - Edit VM and change management network to external VLAN with bridge type via the web form Ping VM Attempt to SSH to VM Expected Results VM should save and reboot You should be able to ping the VM from an external network You should be able to SSH to VM + <ol> <li>Edit VM and change management network to external VLAN with bridge type via the web form</li> <li>Ping VM</li> <li>Attempt to SSH to VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save and reboot</li> <li>You should be able to ping the VM from an external network</li> <li>You should be able to SSH to VM</li> </ol> Edit network via YAML change external VLAN to management network (e2e_be) https://harvester.github.io/tests/manual/network/edit-network-yaml-change-vlan-to-management/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/edit-network-yaml-change-vlan-to-management/ - Edit VM and change external VLAN to management network with bridge type via YAML Ping VM Attempt to SSH to VM Expected Results VM should save and reboot You should be able to ping the VM from an external network You should be able to SSH to VM + <ol> <li>Edit VM and change external VLAN to management network with bridge type via YAML</li> <li>Ping VM</li> <li>Attempt to SSH to VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save and reboot</li> <li>You should be 
able to ping the VM from an external network</li> <li>You should be able to SSH to VM</li> </ol> Edit network via YAML change management network to external VLAN (e2e_be) https://harvester.github.io/tests/manual/network/edit-network-yaml-change-management-to-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/edit-network-yaml-change-management-to-vlan/ - Edit VM and change management network to external VLAN with bridge type via YAML Ping VM Attempt to SSH to VM Expected Results VM should save and reboot You should be able to ping the VM from an external network You should be able to SSH to VM + <ol> <li>Edit VM and change management network to external VLAN with bridge type via YAML</li> <li>Ping VM</li> <li>Attempt to SSH to VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save and reboot</li> <li>You should be able to ping the VM from an external network</li> <li>You should be able to SSH to VM</li> </ol> Enabling vlan on a bonded NIC on vagrant install https://harvester.github.io/tests/manual/network/enabling-vlan-on-bonded-nic-vagrant-install/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/enabling-vlan-on-bonded-nic-vagrant-install/ - Related issues: #1541 Enabling vlan on a bonded NIC breaks the Harvester setup Category: Network Verification Steps Pull ipxe example from https://github.com/harvester/ipxe-examples Vagrant pxe install 3 nodes harvester Access harvester settings page Open settings -&gt; vlan Enable virtual network and set with bond0 Navigate to every page to check harvester is working Create a vlan based on bond0 Expected Results Enabling the virtual network with bond0 should not break the harvester service. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1541">#1541</a> Enabling vlan on a bonded NIC breaks the Harvester setup</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Pull ipxe example from <a href="https://github.com/harvester/ipxe-examples">https://github.com/harvester/ipxe-examples</a></li> <li>Vagrant pxe install 3 nodes harvester</li> <li>Access harvester settings page</li> <li>Open <code>settings</code> -&gt; <code>vlan</code></li> <li>Enable virtual network and set with <code>bond0</code></li> <li>Navigate to every page to check harvester is working</li> <li>Create a vlan based on <code>bond0</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Enabling the virtual network with <code>bond0</code> should not break the harvester service. 
<img src="https://user-images.githubusercontent.com/29251855/143804059-f8fc0bee-b42a-4daa-b0bb-438b64b75db2.png" alt="image"></p> Negative network comes back up after reboot external VLAN (e2e_be) https://harvester.github.io/tests/manual/network/negative-vlan-after-reboot/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/negative-vlan-after-reboot/ - Start pinging the VM reboot the VM Expected Results The VM should respond The VM should reboot The pings should stop getting responses The pings should start getting responses again + <ol> <li>Start pinging the VM</li> <li>reboot the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should respond</li> <li>The VM should reboot</li> <li>The pings should stop getting responses</li> <li>The pings should start getting responses again</li> </ol> Negative network comes back up after reboot management network (e2e_be) https://harvester.github.io/tests/manual/network/negative-management-after-reboot/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/negative-management-after-reboot/ - Start pinging the VM from the management network reboot the VM Expected Results The VM should respond The VM should reboot The pings should stop getting responses The pings should start getting responses again + <ol> <li>Start pinging the VM from the management network</li> <li>reboot the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should respond</li> <li>The VM should reboot</li> <li>The pings should stop getting responses</li> <li>The pings should start getting responses again</li> </ol> Switch the vlan interface of harvester node https://harvester.github.io/tests/manual/network/switch-the-vlan-interface-of-harvester-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/switch-the-vlan-interface-of-harvester-node/ - Related issues: #1464 VM pods turn to the terminating state after switching the VLAN interface Category: Network Verification Steps User ipxe-example to build up 3 nodes harvester Login harvester dashboard -&gt; Access Settings Enable vlan network with harvester-mgmt NIC interface Create a VM using harvester-mgmt Disable vlan network Enable vlan network and select bond0 interface Check host and vm is working Directly switch network interface from bond0 to harvester-mgmt without disable it. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1464">#1464</a> VM pods turn to the terminating state after switching the VLAN interface</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>User ipxe-example to build up 3 nodes harvester</li> <li>Login harvester dashboard -&gt; Access Settings</li> <li>Enable vlan network with <code>harvester-mgmt</code> NIC interface</li> <li>Create a VM using <code>harvester-mgmt</code></li> <li>Disable vlan network</li> <li>Enable vlan network and select <code>bond0</code> interface <img src="https://user-images.githubusercontent.com/29251855/144204800-ed20ab79-0c18-4a70-b258-2468d62e072d.png" alt="image"></li> <li>Check host and vm is working</li> <li>Directly switch network interface from <code>bond0</code> to <code>harvester-mgmt</code> without disable it. 
<img src="https://user-images.githubusercontent.com/29251855/144206080-cbba3e29-b125-422a-b629-9a412a218feb.png" alt="image"></li> <li>Check host and vm is working</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Switch the VLAN interface of this host can&rsquo;t affect Host and VM operation.</li> <li>All harvester node keep in <code>running</code> status</li> </ol> <p><img src="https://user-images.githubusercontent.com/29251855/144206164-092272aa-0488-40f4-bb3d-4a1aea5fdb5d.png" alt="image"></p> Try to add a network with no name (e2e_be) https://harvester.github.io/tests/manual/network/negative-add-network-no-name/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/negative-add-network-no-name/ - Navigate to the networks page in harvester Click Create Don&rsquo;t add a name Add a VLAN ID Click Create Expected Results You should get an error that says you need to add a name + <ol> <li>Navigate to the networks page in harvester</li> <li>Click Create</li> <li>Don&rsquo;t add a name</li> <li>Add a VLAN ID</li> <li>Click Create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get an error that says you need to add a name</li> </ol> Validate network connectivity external VLAN (e2e_be) https://harvester.github.io/tests/manual/network/validate-network-external-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/validate-network-external-vlan/ - Create a new VM Make sure that the network is set to the external VLAN with bridge as the type Ping VM Attempt to SSH to VM Expected Results VM should be created You should be able to ping the VM from an external network You should be able to SSH to VM + <ol> <li>Create a new VM</li> <li>Make sure that the network is set to the external VLAN with bridge as the type</li> <li>Ping VM</li> <li>Attempt to SSH to VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should be created</li> <li>You should be able to ping the VM from an external network</li> <li>You should be able to SSH to VM</li> </ol> Validate network connectivity invalid external VLAN (e2e_be) https://harvester.github.io/tests/manual/network/negative-network-connectivity-invalid-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/negative-network-connectivity-invalid-vlan/ - Create a new VM Make sure that the network is set to the external VLAN with bridge as the type and a VLAN ID that isn&rsquo;t valid for your network Ping VM Attempt to SSH to VM Expected Results VM should be created You should not be able to ping the VM from an external network You should not be able to SSH to VM + <ol> <li>Create a new VM</li> <li>Make sure that the network is set to the external VLAN with bridge as the type and a VLAN ID that isn&rsquo;t valid for your network</li> <li>Ping VM</li> <li>Attempt to SSH to VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should be created</li> <li>You should not be able to ping the VM from an external network</li> <li>You should not be able to SSH to VM</li> </ol> Validate network connectivity management network (e2e_be) https://harvester.github.io/tests/manual/network/validate-network-management-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/validate-network-management-network/ - Create a new VM Make sure that the network is set to the management network with masquerade as the type Ping VM Attempt to SSH to VM Expected Results VM should be 
created You should not be able to ping the VM from an external network You should not be able to SSH to VM + <ol> <li>Create a new VM</li> <li>Make sure that the network is set to the management network with masquerade as the type</li> <li>Ping VM</li> <li>Attempt to SSH to VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should be created</li> <li>You should not be able to ping the VM from an external network</li> <li>You should not be able to SSH to VM</li> </ol> VIP configured in a VLAN network should be reached https://harvester.github.io/tests/manual/network/vip-configured-on-vlan-network-should-be-reached/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/vip-configured-on-vlan-network-should-be-reached/ - Related issue: #1424 VIP configured in a VLAN network can not be reached Category: Network Environment Setup The network environment must have vlan network configured and also have DHCP server prepared on your testing vlan Verification Steps Enable virtual network with harvester-mgmt Open Network -&gt; Create a virtual network Provide network name and correct vlan id Open Route, use the default auto setting Create a VM and use the created route SSH to harvester node Ping the IP of the created VM Create a virtual network Provide network name and correct vlan id Open Route, use the manual setting Provide the CIDR and Gateway value Repeat steps 5 - 7 Expected Results Check the auto route vlan can be detected with running status Check the manual route vlan can be detected with running status Check the VM can get IP based on auto or manual vlan route Check can ping VM IP from harvester node + <ul> <li>Related issue: <a href="https://github.com/harvester/harvester/issues/1424">#1424</a> VIP configured in a VLAN network can not be reached</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Network</li> </ul> <h2 id="environment-setup">Environment Setup</h2> <ul> <li>The network environment must have vlan network configured and also have DHCP server prepared on your testing vlan</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Enable virtual network with <code>harvester-mgmt</code></li> <li>Open Network -&gt; Create a virtual network</li> <li>Provide network name and correct vlan id <img src="https://user-images.githubusercontent.com/29251855/148182659-5b0f0d14-2654-4123-a417-4bd4e101b597.png" alt="image"></li> <li>Open Route, use the default <code>auto</code> setting <img src="https://user-images.githubusercontent.com/29251855/148182727-a445667c-fc78-4c83-a3d5-0238b8d2b17c.png" alt="image"></li> <li>Create a VM and use the created route</li> <li>SSH to harvester node</li> <li>Ping the IP of the created VM</li> <li>Create a virtual network</li> <li>Provide network name and correct vlan id</li> <li>Open Route, use the <code>manual</code> setting</li> <li>Provide the <code>CIDR</code> and <code>Gateway</code> value <img src="https://user-images.githubusercontent.com/29251855/148185885-b2c5b075-bd08-4fd6-97ad-7485a67e9339.png" alt="image"></li> <li>Repeat steps 5 - 7</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Check the <code>auto</code> route vlan can be detected with <code>running</code> status <img src="https://user-images.githubusercontent.com/29251855/148183159-1242ad24-ee44-4428-8592-abdfa5d863fc.png" alt="image"></li> <li>Check the <code>manual</code> route vlan can be detected with <code>running</code> status</li> <li>Check the VM can get IP based on <code>auto</code> or 
<code>manual</code> vlan route</li> <li>Check can ping VM IP from harvester node</li> </ol> VIP is accessibility with VLAN enabled on management port https://harvester.github.io/tests/manual/network/vip_vlan_mgmtport/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/network/vip_vlan_mgmtport/ - Ref: https://github.com/harvester/harvester/issues/1722 Verify Items VIP should be accessible when VLAN enabled on management port Case: Single Node enables VLAN on management port Install Harvester with single node Login to dashboard then navigate to Settings Edit vlan to enable VLAN on harvester-mgmt reboot the node after reboot, login to console Run the command should not contain any output sudo -s kubectl get pods -A --template '{{range .items}}{{.metadata.name}}{{&quot;\n&quot;}}{{end}}' | grep harvester-network-controller-manager | xargs kubectl logs -n harvester-system | grep &quot;Failed to update lock&quot; Repeat step 4-6 with 10 times, should not have any error + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1722">https://github.com/harvester/harvester/issues/1722</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>VIP should be accessible when VLAN enabled on management port</li> </ul> <h2 id="case-single-node-enables-vlan-on-management-port">Case: Single Node enables VLAN on management port</h2> <ol> <li>Install Harvester with single node</li> <li>Login to dashboard then navigate to Settings</li> <li>Edit <strong>vlan</strong> to enable VLAN on <code>harvester-mgmt</code></li> <li>reboot the node</li> <li>after reboot, login to console</li> <li>Run the command should not contain any output <ul> <li><code>sudo -s</code></li> <li><code>kubectl get pods -A --template '{{range .items}}{{.metadata.name}}{{&quot;\n&quot;}}{{end}}' | grep harvester-network-controller-manager | xargs kubectl logs -n harvester-system | grep &quot;Failed to update lock&quot;</code></li> </ul> </li> <li>Repeat step 4-6 with 10 times, should not have any error</li> </ol> diff --git a/manual/node-driver/index.xml b/manual/node-driver/index.xml index fdb96f236..a5b0eef64 100644 --- a/manual/node-driver/index.xml +++ b/manual/node-driver/index.xml @@ -12,196 +12,196 @@ https://harvester.github.io/tests/manual/node-driver/cluster-custom-docker-install-url/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-custom-docker-install-url/ - add a harvester node template Refer to the &ldquo;Test Data&rdquo; value setting. Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. 
+ <ol> <li>add a harvester node template</li> <li>Refer to the &ldquo;Test Data&rdquo; value setting.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Add a custom "Insecure Registries" https://harvester.github.io/tests/manual/node-driver/cluster-custom-insecure-registries/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-custom-insecure-registries/ - add a harvester node template Refer to the &ldquo;Test Data&rdquo; value setting. Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration Go to node, execute docker info, check the &ldquo;Insecure Registries&rdquo; setting is &ldquo;harbor.wujing.site&rdquo; Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. + <ol> <li>add a harvester node template</li> <li>Refer to the &ldquo;Test Data&rdquo; value setting.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> <li>Go to node, execute <code>docker info</code>, check the &ldquo;Insecure Registries&rdquo; setting is &ldquo;harbor.wujing.site&rdquo;</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Add a custom "Registry Mirrors" https://harvester.github.io/tests/manual/node-driver/cluster-custom-registry-mirrors/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-custom-registry-mirrors/ - add a harvester node template Refer to the &ldquo;Test Data&rdquo; value setting. 
Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration Go to node, execute &ldquo;docker info&rdquo;, check the &ldquo;Registry Mirrors&rdquo; setting is &ldquo;https://s06nkgus.mirror.aliyuncs.com&rdquo; Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. + <ol> <li>add a harvester node template</li> <li>Refer to the &ldquo;Test Data&rdquo; value setting.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> <li>Go to node, execute &ldquo;docker info&rdquo;, check the &ldquo;Registry Mirrors&rdquo; setting is &ldquo;<a href="https://s06nkgus.mirror.aliyuncs.com">https://s06nkgus.mirror.aliyuncs.com</a>&rdquo;</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Add a custom "Storage Driver" https://harvester.github.io/tests/manual/node-driver/cluster-custom-storage-driver/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-custom-storage-driver/ - add a harvester node template Refer to the &ldquo;Test Data&rdquo; value setting. Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration Go to node, execute &ldquo;docker info&rdquo;, check the Storage Driver setting is overlay Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. 
+ <ol> <li>add a harvester node template</li> <li>Refer to the &ldquo;Test Data&rdquo; value setting.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> <li>Go to node, execute &ldquo;docker info&rdquo;, check the Storage Driver setting is overlay</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Add cluster driver https://harvester.github.io/tests/manual/node-driver/add-cluster-driver/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/add-cluster-driver/ - Cluster Management &gt; Drivers &gt; Node Drivers Click &ldquo;Add Node driver&rdquo; Add the correct configuration and save Expected Results Created successfully, status is active + <ol> <li>Cluster Management &gt; Drivers &gt; Node Drivers</li> <li>Click &ldquo;Add Node driver&rdquo;</li> <li>Add the correct configuration and save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Created successfully, status is active</li> </ol> Add the different roles to the cluster https://harvester.github.io/tests/manual/node-driver/q-cluster-different-roles/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/q-cluster-different-roles/ - Create three users user1, user2, user3 Give the roles of Cluster Owner to user1, Create Project to user2 and Cluster Member to user3 respectively. 
Login with these three roles Expected Results + <ol> <li>Create three users user1, user2, user3</li> <li>Give the roles of Cluster Owner to user1, Create Project to user2 and Cluster Member to user3 respectively.</li> <li>Login with these three roles</li> </ol> <h2 id="expected-results">Expected Results</h2> Add/remove a node in the created harvester cluster https://harvester.github.io/tests/manual/node-driver/cluster-add-remove-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-add-remove-node/ - add/remove a node in the created harvester cluster Expected Results rancher on the cluster modified successfully harvester corresponding VM node added/removed successfully + <ol> <li>add/remove a node in the created harvester cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>rancher on the cluster modified successfully</li> <li>harvester corresponding VM node added/removed successfully</li> </ol> Backup and restore of harvester cluster https://harvester.github.io/tests/manual/node-driver/q-cluster-backup-restore/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/q-cluster-backup-restore/ - create a deployment in harvester cluster Go to the rancher&rsquo;s cluster list and make a backup of the harvester cluster After the backup is complete, delete the deployment created in the harvester cluster go to the list of clusters in the rancher and restore the harvester cluster Expected Results + <ol> <li>create a deployment in harvester cluster</li> <li>Go to the rancher&rsquo;s cluster list and make a backup of the harvester cluster</li> <li>After the backup is complete, delete the deployment created in the harvester cluster</li> <li>go to the list of clusters in the rancher and restore the harvester cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> Basic functional verification of Harvester cluster after creation https://harvester.github.io/tests/manual/node-driver/verify-cluster-functionality/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/verify-cluster-functionality/ - create the project. deploy deployment Expected Results The project is created successfully Deployment successfully deployed + <ol> <li>create the project.</li> <li>deploy deployment</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The project is created successfully</li> <li>Deployment successfully deployed</li> </ol> Cluster add labs https://harvester.github.io/tests/manual/node-driver/create-add-labs/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/create-add-labs/ - add a harvester node template Refer to the &ldquo;Test Data&rdquo; value setting. Use this template to create the corresponding cluster Expected Results Use the command &ldquo;kubectl get node &ndash;show-labels&rdquo; to see the success of the added tabs Go to the node details page of UI, click the &ldquo;Edit Node&rdquo; button, and check Labels Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. 
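For the label check in the "Cluster add labs" case above, a possible shell sketch; the node name and the jsonpath query are hypothetical, only the --show-labels listing is taken from the test case:
<pre tabindex="0"><code># list nodes with their labels and confirm the label added via the template is present
kubectl get node --show-labels
# or inspect a single (hypothetical) node for its label map
kubectl get node test-node-1 -o jsonpath='{.metadata.labels}'
</code></pre>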
+ <ol> <li>add a harvester node template</li> <li>Refer to the &ldquo;Test Data&rdquo; value setting.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Use the command &ldquo;kubectl get node &ndash;show-labels&rdquo; to see the success of the added tabs</li> <li>Go to the node details page of UI, click the &ldquo;Edit Node&rdquo; button, and check Labels</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Cluster add Taints https://harvester.github.io/tests/manual/node-driver/cluster-add-taints/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-add-taints/ - add a harvester node template Refer to the &ldquo;Test Data&rdquo; value setting. Use this template to create the corresponding cluster Expected Results Use the command kubectl describe node test-tain5 | grep Taint to see if Taint was added successfully. Go to the node details page of UI, click the &ldquo;Edit Node&rdquo; button, and check Taint Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. + <ol> <li>add a harvester node template</li> <li>Refer to the &ldquo;Test Data&rdquo; value setting.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Use the command <code>kubectl describe node test-tain5 | grep Taint</code> to see if Taint was added successfully.</li> <li>Go to the node details page of UI, click the &ldquo;Edit Node&rdquo; button, and check Taint</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Create a 3 nodes harvester cluster with RKE1 (only with mandatory info, other values stays with default) https://harvester.github.io/tests/manual/node-driver/create-3-node-rke1/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/create-3-node-rke1/ - From the Rancher home page, click on Create Select RKE1 on the right and click on Harvester Enter a cluster name Give a prefix name for the VMs Increase count to 3 nodes Check etcd, Control Plane and Worker boxes Select or create a node template if needed Click on Add node template Create credentials by selecting your harvester cluster Fill the instance option fields, pay attention to correctly write the default ssh user of the chosen image in the SSH user field Give a name to the rancher template and click on Create Click on create to spin the cluster up Expected Results The status of the created cluster shows active The status of the corresponding vm on harvester active The 3 nodes should be with the active status + 
<ol> <li>From the Rancher home page, click on Create</li> <li>Select RKE1 on the right and click on Harvester</li> <li>Enter a cluster name</li> <li>Give a prefix name for the VMs</li> <li>Increase count to 3 nodes</li> <li>Check etcd, Control Plane and Worker boxes</li> <li>Select or create a node template if needed <ul> <li>Click on Add node template</li> <li>Create credentials by selecting your harvester cluster</li> <li>Fill the instance option fields, pay attention to correctly write the default ssh user of the chosen image in the SSH user field</li> <li>Give a name to the rancher template and click on Create</li> </ul> </li> <li>Click on create to spin the cluster up</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>The status of the corresponding vm on harvester active</li> <li>The 3 nodes should be with the active status</li> </ol> Create a 3 nodes harvester cluster with RKE2 (only with mandatory info, other values stays with default) https://harvester.github.io/tests/manual/node-driver/create-3-node-rke2/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/create-3-node-rke2/ - From the Rancher home page, click on Create Select RKE2 on the right and click on Harvester Create the credential to talk with the harvester provider Select your harvester cluster (external or internal) Enter a cluster name Increase machine count to 3 Fill the mandatory fields Namespace Image Network SSH User (default ssh user of the chosen image) Click on create to spin the cluster up Expected Results The status of the created cluster shows active The status of the corresponding vm on harvester active The 3 nodes should be with the active status + <ol> <li>From the Rancher home page, click on Create</li> <li>Select RKE2 on the right and click on Harvester</li> <li>Create the credential to talk with the harvester provider <ul> <li>Select your harvester cluster (external or internal)</li> </ul> </li> <li>Enter a cluster name</li> <li>Increase machine count to 3</li> <li>Fill the mandatory fields <ul> <li>Namespace</li> <li>Image</li> <li>Network</li> <li>SSH User (default ssh user of the chosen image)</li> </ul> </li> <li>Click on create to spin the cluster up</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>The status of the corresponding vm on harvester active</li> <li>The 3 nodes should be with the active status</li> </ol> Create a harvester cluster and add Taint to a node https://harvester.github.io/tests/manual/node-driver/q-cluster-add-taint/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/q-cluster-add-taint/ - Expected Results + <h2 id="expected-results">Expected Results</h2> Create a harvester cluster with 3 master nodes https://harvester.github.io/tests/manual/node-driver/add-3-master-nodes/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/add-3-master-nodes/ - add a harvester node template Create harvester cluster count set to 3 Expected Results The status of the created cluster shows active show the 3 created node status running in harvester&rsquo;s vm list the information displayed on rancher and harvester matches the template configuration + <ol> <li>add a harvester node template</li> <li>Create harvester cluster count set to 3</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows 
active</li> <li>show the 3 created node status running in harvester&rsquo;s vm list</li> <li>the information displayed on rancher and harvester matches the template configuration</li> </ol> Create a harvester cluster with a non-default version of k8s https://harvester.github.io/tests/manual/node-driver/cluster-non-default-k8s/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-non-default-k8s/ - Verify versions 1.19.10, 1.18.18, 1.17.17, 1.16.15 respectively Expected Results k8s displayed on the UI is consistent with the created version (cluster list, host list) Use kubectl version to see that the version information is the same as the created version + <ol> <li>Verify versions <code>1.19.10</code>, <code>1.18.18</code>, <code>1.17.17</code>, <code>1.16.15</code> respectively</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>k8s displayed on the UI is consistent with the created version (cluster list, host list)</li> <li>Use <code>kubectl version</code> to see that the version information is the same as the created version</li> </ol> Create a harvester cluster with different images https://harvester.github.io/tests/manual/node-driver/cluster-different-images/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-different-images/ - d a harvester node template Set the image, it should be a drop-down list, refer to &ldquo;Test Data&rdquo; for other values ubuntu-18.04-server-cloudimg-amd64.img focal-server-cloudimg-amd64-disk-kvm.img Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active The status of the corresponding vm on harvester active The information displayed on rancher and harvester matches the template configuration The drop-down list of images in the harvester node template corresponds to the list of images in the harvester Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. 
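For the non-default Kubernetes version case above, a rough way to confirm that the running version matches the one selected at creation time (output formatting varies by kubectl release):
<pre tabindex="0"><code># client and server versions reported by the cluster
kubectl version
# the VERSION column should match the selected release, e.g. v1.19.10
kubectl get node
</code></pre>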
+ <ol> <li>d a harvester node template</li> <li>Set the image, it should be a drop-down list, refer to &ldquo;Test Data&rdquo; for other values <ul> <li>ubuntu-18.04-server-cloudimg-amd64.img</li> <li>focal-server-cloudimg-amd64-disk-kvm.img</li> </ul> </li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>The status of the corresponding vm on harvester active</li> <li>The information displayed on rancher and harvester matches the template configuration</li> <li>The drop-down list of images in the harvester node template corresponds to the list of images in the harvester</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Create a harvester cluster, template drop-down list validation https://harvester.github.io/tests/manual/node-driver/cluster-template-dropdown-multi-user/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-template-dropdown-multi-user/ - Create multiple harvester Node Templates with different users Add harvester cluster and set Template Expected Results pop up a template list pop-up box Show the templates you created and the templates created by other users + <ol> <li>Create multiple harvester Node Templates with different users</li> <li>Add harvester cluster and set Template</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>pop up a template list pop-up box</li> <li>Show the templates you created and the templates created by other users</li> </ol> Create harvester cluster using non-default CPUs, Memory, Disk https://harvester.github.io/tests/manual/node-driver/cluster-non-default-resources/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-non-default-resources/ - add a harvester node template The set CPUs, Memory, and Disk values, refer to &ldquo;Test Data&rdquo; for other values Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:4 Memorys:8 Disk:50 Bus:Virtlo Image: openSUSE-Leap-15. 
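One way to confirm the non-default CPUs, Memory and Disk from inside the provisioned VM; this assumes SSH access with the template's SSH user and is not part of the original steps:
<pre tabindex="0"><code># expect 4 CPUs, roughly 8 GiB of RAM and a roughly 50 GiB root disk per the Test Data
nproc
free -g
lsblk
</code></pre>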
+ <ol> <li>add a harvester node template</li> <li>The set CPUs, Memory, and Disk values, refer to &ldquo;Test Data&rdquo; for other values</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:4 Memorys:8 Disk:50 Bus:Virtlo Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Create harvester clusters with different Bus https://harvester.github.io/tests/manual/node-driver/cluster-different-bus/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-different-bus/ - add a harvester node template Set the “Network Name”, it should be a drop-down list, refer to &ldquo;Test Data&rdquo; for other values VirtIO SATA SCSI Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration The drop-down list of &ldquo;BUS&rdquo; in the harvester node template corresponds to the list of “BUS” in the harvester Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. 
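A quick, non-authoritative in-guest check of the disk bus for the different-Bus case; VirtIO disks usually enumerate as vda (TRAN may be blank), while SATA/SCSI disks appear as sda with a sata or scsi transport:
<pre tabindex="0"><code># inside the VM, list block devices with their transport
lsblk -o NAME,SIZE,TYPE,TRAN
</code></pre>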
+ <ol> <li>add a harvester node template</li> <li>Set the “Network Name”, it should be a drop-down list, refer to &ldquo;Test Data&rdquo; for other values <ul> <li>VirtIO</li> <li>SATA</li> <li>SCSI</li> </ul> </li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> <li>The drop-down list of &ldquo;BUS&rdquo; in the harvester node template corresponds to the list of “BUS” in the harvester</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Create harvester clusters with different Networks https://harvester.github.io/tests/manual/node-driver/cluster-different-networks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-different-networks/ - add a harvester node template Set the “Network Name”, it should be a drop-down list, refer to &ldquo;Test Data&rdquo; for other values vlan1 vlan2 Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration The drop-down list of &ldquo;Network Name&rdquo; in the harvester node template corresponds to the list of “Network Name” in the harvester Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. 
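A possible in-guest check that the VM landed on the selected VLAN network for the different-Networks case; interface names and addressing depend on the environment and are not specified in the test data:
<pre tabindex="0"><code># inside the VM, confirm an address was obtained on the selected VLAN network
ip -4 addr show
# the default route should point at that network's gateway
ip route
</code></pre>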
+ <ol> <li>add a harvester node template</li> <li>Set the “Network Name”, it should be a drop-down list, refer to &ldquo;Test Data&rdquo; for other values <ul> <li>vlan1</li> <li>vlan2</li> </ul> </li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> <li>The drop-down list of &ldquo;Network Name&rdquo; in the harvester node template corresponds to the list of “Network Name” in the harvester</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1/vlan2 SSH User: opensuse </code></pr Deactivate/activate/delete Harvester Node Driver https://harvester.github.io/tests/manual/node-driver/deactivate-activate-deletenode-driver/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/deactivate-activate-deletenode-driver/ - With Rancher &lt; 2.6: Tools-&gt;Driver Management→Node Driver Deactivate/activate/delete Harvester Node Driver With Rancher 2.6: Cluster Management &gt; Drivers &gt; Node Drivers Deactivate/activate/delete Harvester Node Driver Expected Results Harvester icon is not visible when creating a cluster / Harvester icon is visible when creating a cluster /Harvester icon is not visible when creating a cluster + <p>With Rancher &lt; 2.6:</p> <ol> <li>Tools-&gt;Driver Management→Node Driver</li> <li>Deactivate/activate/delete Harvester Node Driver With Rancher 2.6:</li> <li>Cluster Management &gt; Drivers &gt; Node Drivers</li> <li>Deactivate/activate/delete Harvester Node Driver</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Harvester icon is not visible when creating a cluster / Harvester icon is visible when creating a cluster /Harvester icon is not visible when creating a cluster</li> </ol> Delete Cluster https://harvester.github.io/tests/manual/node-driver/cluster-delete/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/cluster-delete/ - Delete Cluster Expected Results successful cluster deletion in rancher the corresponding VM node in harvester is deleted successfully + <ol> <li>Delete Cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>successful cluster deletion in rancher</li> <li>the corresponding VM node in harvester is deleted successfully</li> </ol> Guest CSI Driver https://harvester.github.io/tests/manual/node-driver/guest-csi-driver/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/guest-csi-driver/ - Start rancher using docker in a vm and start harvester in another Import harvester into rancher from &ldquo;Virtualization Management&rdquo; page On rancher, enable harvester node driver at &ldquo;Cluster Management&rdquo; -&gt; &ldquo;Drivers&rdquo; -&gt; &ldquo;Node Driver&rdquo; Go back to &ldquo;Cluster Management&rdquo; and create a rke2 cluster using Harvester Once the created cluster is active on the &ldquo;Cluster Management&rdquo; page, click on the &ldquo;Explore&rdquo; Go to 
&ldquo;Workload&rdquo; -&gt; &ldquo;Deployment&rdquo; and &ldquo;Create&rdquo; a new deployment, during which in the page of &ldquo;Storage&rdquo;, click on &ldquo;Add Volume&rdquo; and select &ldquo;Create Persistent Volume Claim&rdquo; and select &ldquo;Harvester&rdquo; in the &ldquo;Storage Class&rdquo; Click &ldquo;Create&rdquo; to create the deployment Verify that on the Harvester side, a new volume is created. + <ol> <li>Start rancher using docker in a vm and start harvester in another</li> <li>Import harvester into rancher from &ldquo;Virtualization Management&rdquo; page</li> <li>On rancher, enable harvester node driver at &ldquo;Cluster Management&rdquo; -&gt; &ldquo;Drivers&rdquo; -&gt; &ldquo;Node Driver&rdquo;</li> <li>Go back to &ldquo;Cluster Management&rdquo; and create a rke2 cluster using Harvester</li> <li>Once the created cluster is active on the &ldquo;Cluster Management&rdquo; page, click on the &ldquo;Explore&rdquo;</li> <li>Go to &ldquo;Workload&rdquo; -&gt; &ldquo;Deployment&rdquo; and &ldquo;Create&rdquo; a new deployment, during which in the page of &ldquo;Storage&rdquo;, click on &ldquo;Add Volume&rdquo; and select &ldquo;Create Persistent Volume Claim&rdquo; and select &ldquo;Harvester&rdquo; in the &ldquo;Storage Class&rdquo;</li> <li>Click &ldquo;Create&rdquo; to create the deployment</li> <li>Verify that on the Harvester side, a new volume is created.</li> <li>Delete the created deployment and then delete the created pvc. Verify on the harvester side that the newly created volume is also deleted. create another deployment, say nginx:latest with 8GB storage created as step 6.</li> <li>&ldquo;Execute shell&rdquo; into the deployment above and use &ldquo;dd&rdquo; command to test the read &amp; write speed in the directory where the pvc is mounted: <ul> <li><code>dd if=/dev/zero of=tempfile bs=1M count=5120</code></li> <li><code>dd if=/dev/null of=tempfile bs=1M count=5120</code></li> </ul> </li> <li>SSH into a VM created on the bare metal and run the same <code>dd</code> command <ul> <li><code>dd if=/dev/zero of=tempfile bs=1M count=5120</code></li> <li><code>dd if=/dev/null of=tempfile bs=1M count=5120</code></li> </ul> </li> <li>Scale down the above deployment to 0 replica and resize the pvc to 15GB on the harvester side:</li> <li>Double check the pvc is resized on the longhorn side.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The cluster should have similar storage speed performance</li> <li>The PVC should resize and show it in the Longhorn UI</li> </ol> Import External Harvester https://harvester.github.io/tests/manual/node-driver/import-external-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/import-external-harvester/ - With Rancher &lt; 2.6: Deploy the rancher and harvester clusters separately In the rancher, add a harvester node template Select &ldquo;External Harvester&rdquo;, and refer to &ldquo;Test Data&rdquo; for other value settings. 
Use this template to create the corresponding cluster With Rancher 2.6: Home page / Import Existing / Generic Add cluster name and click on Create Follow the registration steps Expected Results The status of the created cluster shows active The status of the corresponding vm on harvester active The information displayed on rancher and harvester matches the template configuration Test Data Harvester Node Template HARVESTER OPTIONS Account Access External Harvester Host: Port: 443 Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo Image: openSUSE-Leap-15. + <p>With Rancher &lt; 2.6:</p> <ol> <li>Deploy the rancher and harvester clusters separately</li> <li>In the rancher, add a harvester node template</li> <li>Select &ldquo;External Harvester&rdquo;, and refer to &ldquo;Test Data&rdquo; for other value settings.</li> <li>Use this template to create the corresponding cluster With Rancher 2.6:</li> <li>Home page / Import Existing / Generic</li> <li>Add cluster name and click on Create</li> <li>Follow the registration steps</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>The status of the corresponding vm on harvester active</li> <li>The information displayed on rancher and harvester matches the template configuration</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>External Harvester</li> <li>Host: <!-- raw HTML omitted --> Port: 443</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Import internal harvester https://harvester.github.io/tests/manual/node-driver/import-internal-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/import-internal-harvester/ - enable harvester&rsquo;s rancher-enabled setting Click the rancher button in the upper right corner to access the internal rancher add a harvester node template Select &ldquo;Internal Harvester&rdquo;, and refer to &ldquo;Test Data&rdquo; for other value settings. Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active the status of the corresponding vm on harvester active the information displayed on rancher and harvester matches the template configuration Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo Image: openSUSE-Leap-15. 
+ <ol> <li>enable harvester&rsquo;s rancher-enabled setting</li> <li>Click the rancher button in the upper right corner to access the internal rancher</li> <li>add a harvester node template</li> <li>Select &ldquo;Internal Harvester&rdquo;, and refer to &ldquo;Test Data&rdquo; for other value settings.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>the status of the corresponding vm on harvester active</li> <li>the information displayed on rancher and harvester matches the template configuration</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Use a non-admin user https://harvester.github.io/tests/manual/node-driver/non-admin-user/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/non-admin-user/ - create harvester user ltang, password ltang add a harvester node template Refer to the &ldquo;Test Data&rdquo; value setting. Use this template to create the corresponding cluster Expected Results The status of the created cluster shows active The status of the corresponding vm on harvester active The information displayed on rancher and harvester matches the template configuration Test Data Harvester Node Template HARVESTER OPTIONS Account Access Internal Harvester Username:admin Password:admin Instance Options CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15. 
+ <ol> <li>create harvester user ltang, password ltang</li> <li>add a harvester node template</li> <li>Refer to the &ldquo;Test Data&rdquo; value setting.</li> <li>Use this template to create the corresponding cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>The status of the corresponding vm on harvester active</li> <li>The information displayed on rancher and harvester matches the template configuration</li> </ol> <h2 id="test-data">Test Data</h2> <h3 id="harvester-node-template">Harvester Node Template</h3> <h3 id="harvester-options">HARVESTER OPTIONS</h3> <ul> <li>Account Access</li> <li>Internal Harvester</li> <li>Username:admin</li> <li>Password:admin</li> <li>Instance Options <pre tabindex="0"><code>CPUs:2 Memorys:4 Disk:40 Bus:Virtlo/SATA/SCSI Image: openSUSE-Leap-15.3.x86_64-NoCloud.qcow2 Network Name: vlan1 SSH User: opensuse </code></pr Verify "Add Node Pool" https://harvester.github.io/tests/manual/node-driver/verify-add-node-pool/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/node-driver/verify-add-node-pool/ - Create a cluster of 3 nodes, One node with etcd, Control Plane, Worker, the other two with Worker The cluster is created successfully, use the command kubectl get node to view the node roles Expected Results The status of the created cluster shows active show the 3 created node status running in harvester&rsquo;s vm list the information displayed on rancher and harvester matches the template configuration Check that the node role is correct + <ol> <li>Create a cluster of 3 nodes, One node with etcd, Control Plane, Worker, the other two with Worker</li> <li>The cluster is created successfully, use the command <code>kubectl get node</code> to view the node roles</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The status of the created cluster shows active</li> <li>show the 3 created node status running in harvester&rsquo;s vm list</li> <li>the information displayed on rancher and harvester matches the template configuration</li> <li>Check that the node role is correct</li> </ol> diff --git a/manual/templates/index.xml b/manual/templates/index.xml index 29a26fdcf..da8435f21 100644 --- a/manual/templates/index.xml +++ b/manual/templates/index.xml @@ -12,35 +12,35 @@ https://harvester.github.io/tests/manual/templates/allow-users-to-create-cloud-config-template-on-vm-creating-page/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/templates/allow-users-to-create-cloud-config-template-on-vm-creating-page/ - Related issues: #1433 allow users to create cloud-config template on the VM creating page Category: Virtual Machine Verification Steps Create a new virtual machine Click advanced options Drop down user data template -&gt; create new Drop down network data template -&gt; create new Expected Results User can create user and network data template when create virtual machine Created cloud-init template template can be saved and auto selected to the latest one + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1433">#1433</a> allow users to create cloud-config template on the VM creating page</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Virtual Machine</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a new virtual machine</li> <li>Click advanced options</li> <li>Drop down user data template -&gt; create new</li> <li>Drop down network data template -&gt; create 
new</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>User can create user and network data template when create virtual machine <img src="https://user-images.githubusercontent.com/29251855/139009117-9c191986-2253-4bff-b73f-962eabe2b2d9.png" alt="image"> Created cloud-init template</li> <li>template can be saved and auto selected to the latest one <img src="https://user-images.githubusercontent.com/29251855/139008946-97f0d528-c5b9-4add-82d9-4105bd51f0c5.png" alt="image"></li> </ol> Chain VM templates and images https://harvester.github.io/tests/manual/templates/760-chained-vm-templates/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/templates/760-chained-vm-templates/ - Related issues: #760 cloud config byte limit Verification Steps Create a vm and add userData or networkData, test if it works Run VM health checks create a vm template and add userData create a new vm and use the template Run VM health checks use the existing vm to generate a template, then use the template to create a new vm Run VM health Checks Expected Results All VM&rsquo;s should create All VM Health Checks should pass + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/760">#760</a> cloud config byte limit</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a vm and add userData or networkData, test if it works</li> <li>Run VM health checks</li> <li>create a vm template and add userData create a new vm and use the template</li> <li>Run VM health checks</li> <li>use the existing vm to generate a template, then use the template to create a new vm</li> <li>Run VM health Checks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>All VM&rsquo;s should create</li> <li>All VM Health Checks should pass</li> </ol> Create SSH key from templates page https://harvester.github.io/tests/manual/templates/1619-create-ssh-key-from-templates-page/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/templates/1619-create-ssh-key-from-templates-page/ - Related issues: #1619 User is unable to create ssh key through the templates page Verification Steps on a harvester deployment, navigate to advanced -&gt; templates and click create Click create new under SSH section enter valid credentials and save Expected Results SSH key should be created and show in the SSH key section + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1619">#1619</a> User is unable to create ssh key through the templates page</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>on a harvester deployment, navigate to advanced -&gt; templates and click create</li> <li>Click create new under SSH section</li> <li>enter valid credentials and save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>SSH key should be created and show in the SSH key section</li> </ol> Verify network data template https://harvester.github.io/tests/manual/templates/1655-network-data-template/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/templates/1655-network-data-template/ - Related issues: #1655 When using a VM Template the Network Data in the template is not displayed Verification Steps Create new VM template with network data in advanced settings network: version: 1 config: - type: physical name: interface0 subnets: - type: static address: 10.84.99.0/24 gateway: 10.84.99.254 Create new VM and select template Verify that network data is in advanced network config 
Expected Results network data should show + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1655">#1655</a> When using a VM Template the Network Data in the template is not displayed</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create new VM template with network data in advanced settings</li> </ol> <pre tabindex="0"><code>network: version: 1 config: - type: physical name: interface0 subnets: - type: static address: 10.84.99.0/24 gateway: 10.84.99.254 </code></pr Volume size should be editable on derived template https://harvester.github.io/tests/manual/templates/derived_template_configure/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/templates/derived_template_configure/ - Ref: https://github.com/harvester/harvester/issues/1711 Verify Items Volume size can be changed when creating a derived template Case: Update volume size on new template derived from exist template Install Harvester with any Nodes Login to Dashboard Create Image for Template Creation Create Template T1 with Image Volume and additional Volume Modify Template T1 with update Volume size Volume size should be editable Click Save, then edit new version of T1 Volume size should be updated as expected + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1711">https://github.com/harvester/harvester/issues/1711</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Volume size can be changed when creating a derived template</li> </ul> <h2 id="case-update-volume-size-on-new-template-derived-from-exist-template">Case: Update volume size on new template derived from exist template</h2> <ol> <li>Install Harvester with any Nodes</li> <li>Login to Dashboard</li> <li>Create Image for Template Creation</li> <li>Create Template <code>T1</code> with <em>Image Volume</em> and additional <em>Volume</em></li> <li>Modify Template <code>T1</code> with update <em>Volume</em> size</li> <li>Volume size should be editable</li> <li>Click Save, then edit new version of <code>T1</code></li> <li>Volume size should be updated as expected</li> </ol> diff --git a/manual/terraform-provider/index.xml b/manual/terraform-provider/index.xml index 441799f32..d269fa8fc 100644 --- a/manual/terraform-provider/index.xml +++ b/manual/terraform-provider/index.xml @@ -12,70 +12,70 @@ https://harvester.github.io/tests/manual/terraform-provider/install-terraform-provider/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/install-terraform-provider/ - Follow the instruction of the README Expected Results The provider is initialized and the terraform init command succeeds: Initializing provider plugins... - Finding harvester/harvester versions matching &#34;~&gt; 0.1.0&#34;... - Installing harvester/harvester v0.1.0... - Installed harvester/harvester v0.1.0 (unauthenticated) ... Terraform has been successfully initialized! + <p>Follow the instruction of the <a href="https://github.com/harvester/terraform-provider-harvester#install-the-provider">README</a></p> <h2 id="expected-results">Expected Results</h2> <p>The provider is initialized and the terraform init command succeeds:</p> <pre tabindex="0"><code>Initializing provider plugins... - Finding harvester/harvester versions matching &#34;~&gt; 0.1.0&#34;... - Installing harvester/harvester v0.1.0... - Installed harvester/harvester v0.1.0 (unauthenticated) ... Terraform has been successfully initialized! 
</code></pre> Target Harvester by setting the variable kubeconfig with your kubeconfig file in the provider.tf file (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-kubeconfig-variasble/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-kubeconfig-variasble/ - Define the kubeconfig variable in the provider.tf file terraform { required_providers { harvester = { source = &#34;registry.terraform.io/harvester/harvester&#34; version = &#34;~&gt; 0.1.0&#34; } } } provider &#34;harvester&#34; { kubeconfig = &#34;/path/of/my/kubeconfig&#34; } Check if you can interact with the Harvester by creating resource like a SSH key Execute the terraform apply command Expected Results The resource should be created Apply complete! Resources: 1 added, 0 changed, 0 destroyed. Check if you can see your resource in the Harvester WebUI + <ol> <li>Define the kubeconfig variable in the provider.tf file</li> </ol> <pre tabindex="0"><code>terraform { required_providers { harvester = { source = &#34;registry.terraform.io/harvester/harvester&#34; version = &#34;~&gt; 0.1.0&#34; } } } provider &#34;harvester&#34; { kubeconfig = &#34;/path/of/my/kubeconfig&#34; } </code></pre><ol> <li>Check if you can interact with the Harvester by creating resource like a SSH key</li> <li>Execute the <code>terraform apply</code> command</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The resource should be created <code>Apply complete! Resources: 1 added, 0 changed, 0 destroyed.</code></li> <li>Check if you can see your resource in the Harvester WebUI</li> </ol> Target Harvester with the default kubeconfig located in $HOME/.kube/config (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-kubeconfig-home/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-kubeconfig-home/ - Make sure the kubeconfig is defined in the file $HOME/.kube/config Check if you can interact with the Harvester by creating resource like a SSH key Execute the terraform apply command Expected Results The resource should be created Apply complete! Resources: 1 added, 0 changed, 0 destroyed. Check if you can see your resource in the Harvester WebUI + <ol> <li>Make sure the kubeconfig is defined in the file <code>$HOME/.kube/config</code></li> <li>Check if you can interact with the Harvester by creating resource like a SSH key</li> <li>Execute the <code>terraform apply</code> command</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The resource should be created <code>Apply complete! 
Resources: 1 added, 0 changed, 0 destroyed.</code></li> <li>Check if you can see your resource in the Harvester WebUI</li> </ol> Test a deployment with ALL resources at the same time (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/deployment-all-resources/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/deployment-all-resources/ - Re-use the previous generated TF files and group them all either in one directory or in the same file Generates a speculative execution plan with terraform plan command Create the resources with terraform apply command Check that all resources are correctly created/running on the Harvester cluster Destroy the resources with the command terraform destroy Expected Results Refer to the harvester_ssh_key resource expected results + <ol> <li>Re-use the previous generated TF files and group them all either in one directory or in the same file</li> <li>Generates a speculative execution plan with terraform plan command</li> <li>Create the resources with terraform apply command</li> <li>Check that all resources are correctly created/running on the Harvester cluster</li> <li>Destroy the resources with the command terraform destroy</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Refer to the harvester_ssh_key resource expected results</p> Test the harvester_clusternetwork resource (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-clusternetwork-resource/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-clusternetwork-resource/ - Refer to the harvester_ssh_key resource test steps + <p>Refer to the harvester_ssh_key resource test steps</p> Test the harvester_image resource (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-image-resource/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-image-resource/ - Refer to the harvester_ssh_key resource test steps + <p>Refer to the harvester_ssh_key resource test steps</p> Test the harvester_network resource (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-network-resource/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-network-resource/ - Refer to the harvester_ssh_key resource test steps + <p>Refer to the harvester_ssh_key resource test steps</p> Test the harvester_ssh_key resource (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-ssh-key-resource/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-ssh-key-resource/ - These following steps must be done for every resources, for avoiding repetitions, look at the detailed instructions at the beginning of the page. Import a resource Generates a speculative execution plan with terraform plan command Create the resource with terraform apply command Use terraform plan again Use terraform apply again Destroy the resource with the command terraform destroy Expected Results The resource is well imported in the terraform.tfstate file and you can print it with the terraform show command The command should display the difference between the actual status and the configured status Plan: 1 to add, 0 to change, 0 to destroy. 
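The harvester_ssh_key workflow above can be driven from the shell roughly as follows; the expected outputs are the ones quoted in the test case, and the working directory is assumed to contain the resource's .tf definition:
<pre tabindex="0"><code># run from the directory holding the harvester_ssh_key .tf file
terraform init
terraform plan      # expect: Plan: 1 to add, 0 to change, 0 to destroy.
terraform apply     # expect: Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
terraform plan      # expect: No changes. Your infrastructure matches the configuration.
terraform apply     # expect: Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
terraform destroy   # expect: Destroy complete! Resources: 1 destroyed.
</code></pre>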
+ <p>These following steps must be done for every resources, for avoiding repetitions, look at the detailed instructions at the beginning of the page.</p> <ol> <li>Import a resource</li> <li>Generates a speculative execution plan with terraform plan command</li> <li>Create the resource with terraform apply command</li> <li>Use terraform plan again</li> <li>Use terraform apply again</li> <li>Destroy the resource with the command terraform destroy</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The resource is well imported in the terraform.tfstate file and you can print it with the terraform show command</li> <li>The command should display the difference between the actual status and the configured status <code>Plan: 1 to add, 0 to change, 0 to destroy.</code> <code>Apply complete! Resources: 1 added, 0 changed, 0 destroyed.</code></li> <li>You must see the new resource(s) on the Harvester dashboard`</li> <li><code>No changes. Your infrastructure matches the configuration.</code></li> <li><code>Apply complete! Resources: 0 added, 0 changed, 0 destroyed.</code></li> <li><code>Destroy complete! Resources: 1 destroyed.</code></li> </ol> Test the harvester_virtualmachine resource (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-virtualmachine-resource/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-virtualmachine-resource/ - Refer to the harvester_ssh_key resource test steps + <p>Refer to the harvester_ssh_key resource test steps</p> Test the harvester_volume resource (e2e_be) https://harvester.github.io/tests/manual/terraform-provider/harvester-volume-resource/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraform-provider/harvester-volume-resource/ - Refer to the harvester_ssh_key resource test steps + <p>Refer to the harvester_ssh_key resource test steps</p> diff --git a/manual/terraformer/index.xml b/manual/terraformer/index.xml index ad674d43a..ebd5ba060 100644 --- a/manual/terraformer/index.xml +++ b/manual/terraformer/index.xml @@ -12,49 +12,49 @@ https://harvester.github.io/tests/manual/terraformer/harvester-cluster-communicate/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraformer/harvester-cluster-communicate/ - Set the KUBECONFIG env variable with the path of your kubeconfig file Try to import any resource to test the connectivity with the Harvester cluster For instance, try to import ssh-key with: terraformer import harvester -r ssh_key Expected Results You should see: terraformer import harvester -r ssh_key 2021/08/04 15:18:59 harvester importing... ssh_key 2021/08/04 15:18:59 harvester done importing ssh_key ... And the generated files should appear in ./generated/harvester/ssh_key/ + <ol> <li>Set the KUBECONFIG env variable with the path of your kubeconfig file</li> <li>Try to import any resource to test the connectivity with the Harvester cluster For instance, try to import ssh-key with: <code>terraformer import harvester -r ssh_key</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <p>You should see:</p> <pre tabindex="0"><code>terraformer import harvester -r ssh_key 2021/08/04 15:18:59 harvester importing... ssh_key 2021/08/04 15:18:59 harvester done importing ssh_key ... 
</code></pr Import and make changes to clusternetwork resource https://harvester.github.io/tests/manual/terraformer/import-edit-clusternetwork/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraformer/import-edit-clusternetwork/ - Import clusternetwork resource terraformer import harvester -r clusternetwork Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo; Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: default_physical_nic, enable in the clusternetwork.tf file Check the change through either the UI or the API Expected Results Import output terraformer import harvester -r clusternetwork 2021/08/04 15:43:25 harvester importing. + <ol> <li>Import clusternetwork resource</li> </ol> <pre tabindex="0"><code>terraformer import harvester -r clusternetwork </code></pre><ol> <li>Replace the provider (already explained in the installation process above)</li> <li>terraform plan and apply command should print &ldquo;No changes.&rdquo;</li> <li>Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: default_physical_nic, enable in the clusternetwork.tf file</li> <li>Check the change through either the UI or the API</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Import output</li> </ul> <pre tabindex="0"><code>terraformer import harvester -r clusternetwork 2021/08/04 15:43:25 harvester importing... clusternetwork 2021/08/04 15:43:26 harvester done importing clusternetwork ... </code></pr Import and make changes to image resource https://harvester.github.io/tests/manual/terraformer/import-edit-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraformer/import-edit-image/ - Import image resource terraformer import harvester -r image Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo; Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: description, display_name, name, namespace and url in the image.tf file Check the change through either the UI or the API Expected Results Import output terraformer import harvester -r image 2021/08/04 16:14:52 harvester importing. + <ol> <li>Import image resource <code>terraformer import harvester -r image</code></li> <li>Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo;</li> <li>Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: description, display_name, name, namespace and url in the image.tf file</li> <li>Check the change through either the UI or the API</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Import output</li> </ul> <pre tabindex="0"><code>terraformer import harvester -r image 2021/08/04 16:14:52 harvester importing... image 2021/08/04 16:14:52 harvester done importing image ... 
</code></pr Import and make changes to network resource https://harvester.github.io/tests/manual/terraformer/import-edit-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraformer/import-edit-network/ - Import network resource terraformer import harvester -r network Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo; Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: name, namespace and vlan_id in the network.tf file Check the change through either the UI or the API Expected Results Import output terraformer import harvester -r network 2021/08/04 16:14:08 harvester importing. + <ol> <li>Import network resource <code>terraformer import harvester -r network</code></li> <li>Replace the provider (already explained in the installation process above)</li> <li>terraform plan and apply command should print &ldquo;No changes.&rdquo;</li> <li>Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: name, namespace and vlan_id in the network.tf file</li> <li>Check the change through either the UI or the API</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Import output</li> </ul> <pre tabindex="0"><code>terraformer import harvester -r network 2021/08/04 16:14:08 harvester importing... network 2021/08/04 16:14:08 harvester done importing network ... </code></pr Import and make changes to ssh_key resource https://harvester.github.io/tests/manual/terraformer/import-edit-ssh-key/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraformer/import-edit-ssh-key/ - Import ssh_key resource terraformer import harvester -r ssh_key Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo; Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: name, namespace and public_key in the ssh_key.tf file Check the change through either the UI or the API Expected Results Import output terraformer import harvester -r ssh_key 2021/08/04 16:14:36 harvester importing. + <ol> <li>Import ssh_key resource <code>terraformer import harvester -r ssh_key</code></li> <li>Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo;</li> <li>Alter the resource and check with terraform plan then terraform apply</li> <li>For instance, alter the following properties: name, namespace and public_key in the ssh_key.tf file</li> <li>Check the change through either the UI or the API</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Import output</li> </ul> <pre tabindex="0"><code>terraformer import harvester -r ssh_key 2021/08/04 16:14:36 harvester importing... ssh_key 2021/08/04 16:14:37 harvester done importing ssh_key ... 
</code></pr Import and make changes to virtual machine resource https://harvester.github.io/tests/manual/terraformer/import-edit-virtual-machine/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraformer/import-edit-virtual-machine/ - Import virtual machine resource terraformer import harvester -r virtualmachine Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo; Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: cpu, memory, name in the virtualmachine.tf file Check the change through either the UI or the API Expected Results Import output terraformer import harvester -r virtualmachine 2021/08/04 16:15:08 harvester importing. + <ol> <li>Import virtual machine resource <code>terraformer import harvester -r virtualmachine</code></li> <li>Replace the provider (already explained in the installation process above)</li> <li>terraform plan and apply command should print &ldquo;No changes.&rdquo;</li> <li>Alter the resource and check with terraform plan then terraform apply</li> <li>For instance, alter the following properties: cpu, memory, name in the virtualmachine.tf file</li> <li>Check the change through either the UI or the API</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Import output</li> </ul> <pre tabindex="0"><code>terraformer import harvester -r virtualmachine 2021/08/04 16:15:08 harvester importing... virtualmachine 2021/08/04 16:15:09 harvester done importing virtualmachine ... </code></pr Import and make changes to volume resource https://harvester.github.io/tests/manual/terraformer/import-edit-volume/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/terraformer/import-edit-volume/ - Import volume resource terraformer import harvester -r volume Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo; Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: name, namespace in the volume.tf file Check the change through either the UI or the API Expected Results Import output terraformer import harvester -r volume 2021/08/04 16:15:29 harvester importing. + <ol> <li>Import volume resource <code>terraformer import harvester -r volume</code></li> <li>Replace the provider (already explained in the installation process above) terraform plan and apply command should print &ldquo;No changes.&rdquo;</li> <li>Alter the resource and check with terraform plan then terraform apply For instance, alter the following properties: name, namespace in the volume.tf file</li> <li>Check the change through either the UI or the API</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Import output</li> </ul> <pre tabindex="0"><code>terraformer import harvester -r volume 2021/08/04 16:15:29 harvester importing... volume 2021/08/04 16:15:29 harvester done importing volume ... </code></pr diff --git a/manual/ui/index.xml b/manual/ui/index.xml index 9bbe36d29..e59a3376f 100644 --- a/manual/ui/index.xml +++ b/manual/ui/index.xml @@ -12,28 +12,28 @@ https://harvester.github.io/tests/manual/ui/verify-bottom-links/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/ui/verify-bottom-links/ - Click all the external links available at the bottom of the page - Docs, Forums, Slack, File an issue. 
Click the Generate support bundle at the bottom of the page Expected Results The external links should take user to correct URL in new tab in the browser. The support bundle should be generated once the Generate support bundle. The progress should be shown while the bundle is getting generated. The Generated bundle should have all components logs and Yaml + <ol> <li>Click all the external links available at the bottom of the page - Docs, Forums, Slack, File an issue.</li> <li>Click the Generate support bundle at the bottom of the page</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The external links should take user to correct URL in new tab in the browser.</li> <li>The support bundle should be generated once the Generate support bundle. The progress should be shown while the bundle is getting generated.</li> <li>The Generated bundle should have all components logs and Yaml</li> </ol> Verify the Harvester UI URL (e2e_fe) https://harvester.github.io/tests/manual/ui/verify-url/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/ui/verify-url/ - Navigate to the Harvester UI and verify the URL. Verify the Harvester icon on the left top corner Expected Results The URL should be the management ip + /dashboard redirect to login page if not login redirect to dashboard page if already login + <ol> <li>Navigate to the Harvester UI and verify the URL.</li> <li>Verify the Harvester icon on the left top corner</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The URL should be the management ip + /dashboard</li> <li>redirect to login page if not login</li> <li>redirect to dashboard page if already login</li> </ol> Verify the left side menu (e2e_fe) https://harvester.github.io/tests/manual/ui/verify-left-menu/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/ui/verify-left-menu/ - Check all the menu at the left side of the screen. Verify the preference and logout option is available at the right top of the screen Expected Results The menu should have Dashboard, Hosts, Virtual machines, Volumes, Images and Advance. The Advance menu should have sub menu Templates, backups, network, SSH keys, Users, Cloud config templates, Settings. Clicking on the menu should take user to the respective pages + <ol> <li>Check all the menu at the left side of the screen.</li> <li>Verify the preference and logout option is available at the right top of the screen</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The menu should have Dashboard, Hosts, Virtual machines, Volumes, Images and Advance.</li> <li>The Advance menu should have sub menu Templates, backups, network, SSH keys, Users, Cloud config templates, Settings.</li> <li>Clicking on the menu should take user to the respective pages</li> </ol> Verify the links which navigate to the internal pages https://harvester.github.io/tests/manual/ui/verify-internal-links/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/ui/verify-internal-links/ - Click the links available on the pages like on dashboard - host, virtual machines etc Verify the events and resources tabs presents in the pages e.g. - Dashboard, Virtual machines Expected Results The internal link should take user to the correct page in the same tab opened in the browser + <ol> <li>Click the links available on the pages like on dashboard - host, virtual machines etc</li> <li>Verify the events and resources tabs presents in the pages e.g. 
- Dashboard, Virtual machines</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The internal link should take user to the correct page in the same tab opened in the browser</li> </ol> diff --git a/manual/upgrade/index.xml b/manual/upgrade/index.xml index 93582b519..0e91d188e 100644 --- a/manual/upgrade/index.xml +++ b/manual/upgrade/index.xml @@ -12,49 +12,49 @@ https://harvester.github.io/tests/manual/upgrade/rejoin-node-after-upgrade/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/upgrade/rejoin-node-after-upgrade/ - Related issues: #2655 [BUG] reinstall 1st node Category: Upgrade Harvester Environment requirement Network environment has available VLAN id setup on DHCP server DHCP server has setup the IP range can allocate to above VLAN id Harvester node can route to DHCP server through VLAN id to retrieve IP address Network has at least two NICs Suggest not to use SMR type HDD disk Verification Steps Create a 3 nodes v1. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2655">#2655</a> [BUG] reinstall 1st node</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade Harvester</li> </ul> <h2 id="environment-requirement">Environment requirement</h2> <ol> <li>Network environment has available VLAN id setup on DHCP server</li> <li>DHCP server has setup the IP range can allocate to above VLAN id</li> <li>Harvester node can route to DHCP server through VLAN id to retrieve IP address</li> <li>Network has at least two NICs</li> <li>Suggest not to use SMR type HDD disk</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Create a 3 nodes v1.0.3 Harvester cluster.</p> Upgrade Harvester from new cluster network design (after v1.1.0) https://harvester.github.io/tests/manual/upgrade/upgrade-from-new-network-design/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/upgrade/upgrade-from-new-network-design/ - Category: Upgrade Harvester Environment requirement Network environment has available VLAN id setup on DHCP server DHCP server has setup the IP range can allocate to above VLAN id Harvester node can route to DHCP server through VLAN id to retrieve IP address Network has at least two NICs Suggest not to use SMR type HDD disk We can select VM or Bare machine network setup according to available resource Virtual Machine environment setup Clone ipxe-example https://github. 
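<p>A quick way to sanity-check the DHCP-over-VLAN requirements above from any Linux host on the same switch; this is a generic check, not part of the original steps, and assumes iproute2 plus a DHCP client such as dhclient, with the interface name and VLAN id as placeholders.</p> <pre tabindex="0"><code>ip link add link eth1 name eth1.100 type vlan id 100
ip link set eth1.100 up
dhclient -v eth1.100    # should obtain a lease from the range configured for that VLAN
</code></pre>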
+ <h2 id="category">Category:</h2> <ul> <li>Upgrade Harvester</li> </ul> <h2 id="environment-requirement">Environment requirement</h2> <ol> <li>Network environment has available VLAN id setup on DHCP server</li> <li>DHCP server has setup the IP range can allocate to above VLAN id</li> <li>Harvester node can route to DHCP server through VLAN id to retrieve IP address</li> <li>Network has at least two NICs</li> <li>Suggest not to use SMR type HDD disk</li> </ol> <p>We can select VM or Bare machine network setup according to available resource</p> Upgrade Harvester from traditonal cluster network design (before v1.1.0) https://harvester.github.io/tests/manual/upgrade/upgrade-from-traditional-network-design/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/upgrade/upgrade-from-traditional-network-design/ - Category: Upgrade Harvester Environment requirement Network environment has available VLAN id setup on DHCP server DHCP server has setup the IP range can allocate to above VLAN id Harvester node can route to DHCP server through VLAN id to retrieve IP address Network has at least two NICs Network has at least two NICs Suggest not to use SMR type HDD disk We can select VM or Bare machine network setup according to available resource + <h2 id="category">Category:</h2> <ul> <li>Upgrade Harvester</li> </ul> <h2 id="environment-requirement">Environment requirement</h2> <ol> <li>Network environment has available VLAN id setup on DHCP server</li> <li>DHCP server has setup the IP range can allocate to above VLAN id</li> <li>Harvester node can route to DHCP server through VLAN id to retrieve IP address</li> <li>Network has at least two NICs</li> <li>Network has at least two NICs</li> <li>Suggest not to use SMR type HDD disk</li> </ol> <p>We can select VM or Bare machine network setup according to available resource</p> Upgrade Harvester in Fully Airgapped Environment https://harvester.github.io/tests/manual/upgrade/fully-airgapped-upgrade/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/upgrade/fully-airgapped-upgrade/ - Category: Upgrade Harvester Environment requirement Airgapped Network without internet connectivity Network environment has available VLAN id setup on DHCP server DHCP server has setup the IP range can allocate to above VLAN id Harvester node can route to DHCP server through VLAN id to retrieve IP address Network has at least two NICs Suggest not to use SMR type HDD disk We can select VM or Bare machine network setup according to your available resource Virtual Machine environment setup Clone ipxe-example https://github. 
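<p>A minimal sketch of the virtual machine environment setup used by the airgapped upgrade cases, assuming a libvirt/vagrant host; the directory name and provisioning script come from the ipxe-examples repository and may differ between branches.</p> <pre tabindex="0"><code>git clone https://github.com/harvester/ipxe-examples
cd ipxe-examples/vagrant-pxe-harvester
# edit settings.yml: set harvester_network_config.offline: true and adjust nodes/NICs
# provision the cluster with the repo provisioning script (setup_harvester.sh at the time of writing)
vagrant ssh pxe_server    # then edit dhcpd.conf so the PXE server serves addresses on the test VLAN
</code></pre>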
+ <h2 id="category">Category:</h2> <ul> <li>Upgrade Harvester</li> </ul> <h2 id="environment-requirement">Environment requirement</h2> <ol> <li>Airgapped Network without internet connectivity</li> <li>Network environment has available VLAN id setup on DHCP server</li> <li>DHCP server has setup the IP range can allocate to above VLAN id</li> <li>Harvester node can route to DHCP server through VLAN id to retrieve IP address</li> <li>Network has at least two NICs</li> <li>Suggest not to use SMR type HDD disk</li> </ol> <h4 id="we-can-select-vm-or-bare-machine-network-setup-according-to-your-available-resource">We can select VM or Bare machine network setup according to your available resource</h4> <h2 id="virtual-machine-environment-setup">Virtual Machine environment setup</h2> <ol> <li>Clone ipxe-example <a href="https://github.com/harvester/ipxe-examples">https://github.com/harvester/ipxe-examples</a></li> <li>Switch to v1.0 or main branch</li> <li>Edit Vagrantfile, add a new network interface of <code>pxe_server.vm.network</code></li> <li>Set the <code>pxe_server.vm.network</code> bond to correct <code>libvirt</code> network</li> <li>Add two additional new network interface of <code>harvester_node.vm.network</code></li> <li>Edit the settings.yml, set <code>harvester_network_config.offline: true</code></li> <li>Use ipxe-example to provision a multi nodes Harvester cluster</li> <li>Run <code>varant ssh pxe_server</code> to access pxe server</li> <li>Edit the <code>dhcpd.conf</code>, let pxe server can create a vlan and assign IP to it</li> </ol> <h2 id="bare-machine-environment-setup">Bare machine environment setup</h2> <ol> <li>Ensure your switch router have setup VLAN network</li> <li>Setup the VLAN connectivity to your Router/Gateway device</li> <li>Disable internet connectivity on Router</li> <li>Provision a multi nodes Harvester cluster</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>For <code>VLAN 1</code> testing, enable network on settings, select <code>harvester-mgmt</code></p> Upgrade Harvester with bonded NICs on network https://harvester.github.io/tests/manual/upgrade/bonded-nics-traditional-network-upgrade/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/upgrade/bonded-nics-traditional-network-upgrade/ - Related issues: #3047 [BUG] migrate_harv_mgmt_to_mgmt_br.sh should remove ClusterNetwork resource Category: Upgrade Harvester Environment setup from v1.0.3 upgrade to v1.1.1 Clone ipxe-example and switch to v1.0 branch Add three additional Network interface in Vagrantfile harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39;, mac: @settings[&#39;harvester_network_config&#39;][&#39;cluster&#39;][node_number][&#39;mac&#39;] harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; Edit the config-create.yaml.j2 and config-join.yaml.j2 in /ansible/role/harvester/template/ Add the cluster_network and defaultPysicalNIC to harvester-mgmt cluster_networks: vlan: enable: true description: &#34;some description about this cluster network&#34; config: defaultPhysicalNIC: harvester-mgmt Bond multiple NICs on harvester-mgmt and harvester-vlan networks networks: harvester-mgmt: interfaces: - name: {{ 
settings[&#39;harvester_network_config&#39;][&#39;cluster&#39;][0][&#39;mgmt_interface&#39;] }} # The management interface name - name: ens9 method: dhcp bond0: interfaces: - name: {{ settings[&#39;harvester_network_config&#39;][&#39;cluster&#39;][0][&#39;vagrant_interface&#39;] }} method: dhcp harvester-vlan: interfaces: - name: ens7 - name: ens8 method: none Verification Steps Provision previous version of Harvester cluster + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/3047">#3047</a> [BUG] migrate_harv_mgmt_to_mgmt_br.sh should remove ClusterNetwork resource</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade Harvester</li> </ul> <h2 id="environment-setup-from-v103-upgrade-to-v111">Environment setup from v1.0.3 upgrade to v1.1.1</h2> <ol> <li>Clone ipxe-example and switch to <code>v1.0</code> branch</li> <li>Add three additional Network interface in <code>Vagrantfile</code> <pre tabindex="0"><code> harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39;, mac: @settings[&#39;harvester_network_config&#39;][&#39;cluster&#39;][node_number][&#39;mac&#39;] harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; harvester_node.vm.network &#39;private_network&#39;, libvirt__network_name: &#39;harvester&#39; </code></pr Upgrade Harvester with HDD Disks https://harvester.github.io/tests/manual/upgrade/upgrade-with-hdd-disk/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/upgrade/upgrade-with-hdd-disk/ - Category: Upgrade Harvester Environment requirement Network environment has available VLAN id setup on DHCP server DHCP server has setup the IP range can allocate to above VLAN id Harvester node can route to DHCP server through VLAN id to retrieve IP address Network has at least two NICs Use HDD disk with SMR type or slow I/O speed n1-103:~ # smartctl -a /dev/sda smartctl 7.2 2021-09-14 r5237... === START OF INFORMATION SECTION === Model Family: Seagate BarraCuda 3. + <h2 id="category">Category:</h2> <ul> <li>Upgrade Harvester</li> </ul> <h2 id="environment-requirement">Environment requirement</h2> <ol> <li>Network environment has available VLAN id setup on DHCP server</li> <li>DHCP server has setup the IP range can allocate to above VLAN id</li> <li>Harvester node can route to DHCP server through VLAN id to retrieve IP address</li> <li>Network has at least two NICs</li> <li>Use HDD disk with SMR type or slow I/O speed <pre tabindex="0"><code>n1-103:~ # smartctl -a /dev/sda smartctl 7.2 2021-09-14 r5237... 
=== START OF INFORMATION SECTION === Model Family: Seagate BarraCuda 3.5 (SMR) </code></pr Upgrade Harvester with IPv6 DHCP https://harvester.github.io/tests/manual/upgrade/ipv6-dhcp-upgrade/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/upgrade/ipv6-dhcp-upgrade/ - Related issues: #2962 [BUG] Host IP is inconsistent Category: Upgrade Harvester Environment setup Open the virtual machine manager Open the Connection Details -&gt; Virtual Networks Create a new virtual network workload Add the following XML content &lt;network&gt; &lt;name&gt;workload&lt;/name&gt; &lt;uuid&gt;ac62e6bf-6869-41a9-a2b7-25c06c7601c9&lt;/uuid&gt; &lt;forward mode=&#34;nat&#34;&gt; &lt;nat&gt; &lt;port start=&#34;1024&#34; end=&#34;65535&#34;/&gt; &lt;/nat&gt; &lt;/forward&gt; &lt;bridge name=&#34;virbr5&#34; stp=&#34;on&#34; delay=&#34;0&#34;/&gt; &lt;mac address=&#34;52:54:00:7b:ed:99&#34;/&gt; &lt;domain name=&#34;workload&#34;/&gt; &lt;ip address=&#34;192.168.101.1&#34; netmask=&#34;255.255.255.0&#34;&gt; &lt;dhcp&gt; &lt;range start=&#34;192.168.101.128&#34; end=&#34;192.168.101.254&#34;/&gt; &lt;/dhcp&gt; &lt;/ip&gt; &lt;ip family=&#34;ipv6&#34; address=&#34;fd7d:844d:3e17:f3ae::1&#34; prefix=&#34;64&#34;&gt; &lt;dhcp&gt; &lt;range start=&#34;fd7d:844d:3e17:f3ae::100&#34; end=&#34;fd7d:844d:3e17:f3ae::1ff&#34;/&gt; &lt;/dhcp&gt; &lt;/ip&gt; &lt;/network&gt; Change the bridge name to a new one + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/2962">#2962</a> [BUG] Host IP is inconsistent</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Upgrade Harvester</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ol> <li> <p>Open the virtual machine manager</p> </li> <li> <p>Open the Connection Details -&gt; Virtual Networks</p> </li> <li> <p>Create a new virtual network <code>workload</code></p> </li> <li> <p>Add the following XML content</p> <pre tabindex="0"><code>&lt;network&gt; &lt;name&gt;workload&lt;/name&gt; &lt;uuid&gt;ac62e6bf-6869-41a9-a2b7-25c06c7601c9&lt;/uuid&gt; &lt;forward mode=&#34;nat&#34;&gt; &lt;nat&gt; &lt;port start=&#34;1024&#34; end=&#34;65535&#34;/&gt; &lt;/nat&gt; &lt;/forward&gt; &lt;bridge name=&#34;virbr5&#34; stp=&#34;on&#34; delay=&#34;0&#34;/&gt; &lt;mac address=&#34;52:54:00:7b:ed:99&#34;/&gt; &lt;domain name=&#34;workload&#34;/&gt; &lt;ip address=&#34;192.168.101.1&#34; netmask=&#34;255.255.255.0&#34;&gt; &lt;dhcp&gt; &lt;range start=&#34;192.168.101.128&#34; end=&#34;192.168.101.254&#34;/&gt; &lt;/dhcp&gt; &lt;/ip&gt; &lt;ip family=&#34;ipv6&#34; address=&#34;fd7d:844d:3e17:f3ae::1&#34; prefix=&#34;64&#34;&gt; &lt;dhcp&gt; &lt;range start=&#34;fd7d:844d:3e17:f3ae::100&#34; end=&#34;fd7d:844d:3e17:f3ae::1ff&#34;/&gt; &lt;/dhcp&gt; &lt;/ip&gt; &lt;/network&gt; </code></pr diff --git a/manual/virtual-machines/index.xml b/manual/virtual-machines/index.xml index 5a05300e9..26b22764d 100644 --- a/manual/virtual-machines/index.xml +++ b/manual/virtual-machines/index.xml @@ -12,658 +12,658 @@ https://harvester.github.io/tests/manual/virtual-machines/add-a-network-to-an-existing-vm-with-only-1-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/add-a-network-to-an-existing-vm-with-only-1-network/ - Add a network to the VM Save the VM Wait for it to start/restart Expected Results the VM should start successfully The already existing network connectivity should still work The new connectivity should also work + <ol> <li>Add a network to the VM</li> <li>Save the VM</li> <li>Wait 
for it to start/restart</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should start successfully</li> <li>The already existing network connectivity should still work</li> <li>The new connectivity should also work</li> </ol> Add a network to an existing VM with two networks https://harvester.github.io/tests/manual/virtual-machines/add-a-network-to-an-existing-vm-with-two-networks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/add-a-network-to-an-existing-vm-with-two-networks/ - Add a network to the VM Save the VM Wait for it to start/restart Expected Results the VM should start successfully The already existing network connectivity should still work The new connectivity should also work + <ol> <li>Add a network to the VM</li> <li>Save the VM</li> <li>Wait for it to start/restart</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM should start successfully</li> <li>The already existing network connectivity should still work</li> <li>The new connectivity should also work</li> </ol> Chain VM templates and images https://harvester.github.io/tests/manual/virtual-machines/760-chained-vm-templates/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/760-chained-vm-templates/ - Related issues: #760 cloud config byte limit Verification Steps Create a vm and add userData or networkData, test if it works Run VM health checks create a vm template and add userData create a new vm and use the template Run VM health checks use the existing vm to generate a template, then use the template to create a new vm Run VM health Checks Expected Results All VM&rsquo;s should create All VM Health Checks should pass + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/760">#760</a> cloud config byte limit</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a vm and add userData or networkData, test if it works</li> <li>Run VM health checks</li> <li>Create a vm template and add userData, then create a new vm and use the template</li> <li>Run VM health checks</li> <li>Use the existing vm to generate a template, then use the template to create a new vm</li> <li>Run VM health checks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>All VMs should create</li> <li>All VM Health Checks should pass</li> </ol> Check VM creation required-fields https://harvester.github.io/tests/manual/virtual-machines/1283-vm-creation-required-fields/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/1283-vm-creation-required-fields/ - Related issues: #1283 Fix required fields on VM creation page Verification Steps Create VM without image name and size Create VM without size Create VM wihout image name Create VM without hostname Expected Results You should get an error trying to create VM without image name and size You should get an error trying to create VM without image name You should get an error trying to create VM without size You should not get an error trying to create a VM without hostname + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1283">#1283</a> Fix required fields on VM creation page</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create VM without image name and size</li> <li>Create VM without size</li> <li>Create VM without image name</li> <li>Create VM without hostname</li> </ol> <h2 id="expected-results">Expected
Results</h2> <ol> <li>You should get an error trying to create VM without image name and size</li> <li>You should get an error trying to create VM without image name</li> <li>You should get an error trying to create VM without size</li> <li>You should not get an error trying to create a VM without hostname</li> </ol> Clone VM and don't select start after creation https://harvester.github.io/tests/manual/virtual-machines/clone-vm-and-dont-select-start-after-creation/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/clone-vm-and-dont-select-start-after-creation/ - Case 1 Clone VM from Virtual Machine list and don&rsquo;t select start after creation Expected Results Machine should start if start VM after creation was checked Machine should match the origin machine in Config In YAML You should be able to connect to new VM via console Case 2 Clone VM with volume from Virtual Machine list and don&rsquo;t select start after creation Expected Results Before cloning machine create file run command echo &quot;123&quot; &gt; test. + <h2 id="case-1">Case 1</h2> <ol> <li>Clone VM from Virtual Machine list and don&rsquo;t select start after creation</li> </ol> <h3 id="expected-results">Expected Results</h3> <ol> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine</li> <li>in Config</li> <li>In YAML</li> <li>You should be able to connect to new VM via console</li> </ol> <h2 id="case-2">Case 2</h2> <ol> <li>Clone VM with volume from Virtual Machine list and don&rsquo;t select start after creation</li> </ol> <h3 id="expected-results-1">Expected Results</h3> <ol> <li>Before cloning machine create file run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code></li> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine</li> <li>in Config</li> <li>In YAML</li> <li>You should be able to connect to new VM via console</li> <li>file <code>test.txt</code> should exist</li> </ol> Clone VM that is turned off https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-is-turned-off/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-is-turned-off/ - Case 1 Clone VM from Virtual Machine list that is turned off Expected Results Machine should start if start VM after creation was checked Machine should match the origin machine in Config In YAML You should be able to connect to new VM via console Case 2 Clone VM with volume from Virtual Machine list that is turned off Expected Results Before cloning machine create file run command echo &quot;123&quot; &gt; test. 
+ <h2 id="case-1">Case 1</h2> <ol> <li>Clone VM from Virtual Machine list that is turned off</li> </ol> <h3 id="expected-results">Expected Results</h3> <ol> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> </ol> <h2 id="case-2">Case 2</h2> <ol> <li>Clone VM with volume from Virtual Machine list that is turned off</li> </ol> <h3 id="expected-results-1">Expected Results</h3> <ol> <li>Before cloning machine create file run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code></li> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> <li>file <code>test.txt</code> should exist</li> </ol> Clone VM that is turned on https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-is-turned-on/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-is-turned-on/ - Case 1 Clone VM from Virtual Machine list that is turned on Expected Results Machine should start if start VM after creation was checked Machine should match the origin machine in Config In YAML You should be able to connect to new VM via console Case 2 Clone VM with volume from Virtual Machine list that is turned on Expected Results Before cloning machine create file run command echo &quot;123&quot; &gt; test. + <h2 id="case-1">Case 1</h2> <ol> <li>Clone VM from Virtual Machine list that is turned on</li> </ol> <h3 id="expected-results">Expected Results</h3> <ol> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> </ol> <h2 id="case-2">Case 2</h2> <ol> <li>Clone VM with volume from Virtual Machine list that is turned on</li> </ol> <h3 id="expected-results-1">Expected Results</h3> <ol> <li>Before cloning machine create file run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code></li> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> <li>file <code>test.txt</code> should exist</li> </ol> Clone VM that was created from existing volume https://harvester.github.io/tests/manual/virtual-machines/clone-vm-existing-volume/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/clone-vm-existing-volume/ - Case 1 Clone VM from Virtual Machine list that was created from existing volume Expected Results When completing the clone you should get an error that the volume is already in use Case 2 Clone VM with volume from Virtual Machine list that was created from existing volume Expected Results Before cloning machine create file run command echo &quot;123&quot; &gt; test.txt &amp;&amp; sync Machine should start if start VM after creation was checked Machine should match the origin machine in Config In YAML You should be able to connect to new VM via console file test. 
+ <h2 id="case-1">Case 1</h2> <ol> <li>Clone VM from Virtual Machine list that was created from existing volume</li> </ol> <h3 id="expected-results">Expected Results</h3> <ol> <li>When completing the clone you should get an error that the volume is already in use</li> </ol> <h2 id="case-2">Case 2</h2> <ol> <li>Clone VM with volume from Virtual Machine list that was created from existing volume</li> </ol> <h3 id="expected-results-1">Expected Results</h3> <ol> <li>Before cloning machine create file run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code></li> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> <li>file <code>test.txt</code> should exist</li> </ol> Clone VM that was created from image https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-was-created-from-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-was-created-from-image/ - Case 1 Clone VM from Virtual Machine list that was created from image Expected Results Machine should start if start VM after creation was checked Machine should match the origin machine in Config In YAML You should be able to connect to new VM via console Case 2 Clone VM with volume from Virtual Machine list that was created from image Expected Results Before cloning machine create file run command echo &quot;123&quot; &gt; test. + <h2 id="case-1">Case 1</h2> <ol> <li>Clone VM from Virtual Machine list that was created from image</li> </ol> <h3 id="expected-results">Expected Results</h3> <ol> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> </ol> <h2 id="case-2">Case 2</h2> <ol> <li>Clone VM with volume from Virtual Machine list that was created from image</li> </ol> <h3 id="expected-results-1">Expected Results</h3> <ol> <li>Before cloning machine create file run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code></li> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> <li>file <code>test.txt</code> should exist</li> </ol> Clone VM that was created from template https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-was-created-from-template/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-was-created-from-template/ - Case 1 Clone VM from Virtual Machine list that was created from template Expected Results Machine should start if start VM after creation was checked Machine should match the origin machine in Config In YAML You should be able to connect to new VM via console Case 2 Clone VM with volume from Virtual Machine list that was created from template Expected Results Before cloning machine create file run command echo &quot;123&quot; &gt; test. 
+ <h2 id="case-1">Case 1</h2> <ol> <li>Clone VM from Virtual Machine list that was created from template</li> </ol> <h3 id="expected-results">Expected Results</h3> <ol> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> </ol> <h2 id="case-2">Case 2</h2> <ol> <li>Clone VM with volume from Virtual Machine list that was created from template</li> </ol> <h3 id="expected-results-1">Expected Results</h3> <ol> <li>Before cloning machine create file run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code></li> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> <li>file <code>test.txt</code> should exist</li> </ol> Clone VM that was not created from image https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-was-not-created-from-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/clone-vm-that-was-not-created-from-image/ - Case 1 Clone VM from Virtual Machine list that was not created from image Expected Results Machine should start if start VM after creation was checked Machine should match the origin machine in Config In YAML You should be able to connect to new VM via console Case 2 Clone VM with volume from Virtual Machine list that was not created from image Expected Results Before cloning machine create file run command echo &quot;123&quot; &gt; test. + <h2 id="case-1">Case 1</h2> <ol> <li>Clone VM from Virtual Machine list that was not created from image</li> </ol> <h3 id="expected-results">Expected Results</h3> <ol> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> </ol> <h2 id="case-2">Case 2</h2> <ol> <li>Clone VM with volume from Virtual Machine list that was not created from image</li> </ol> <h3 id="expected-results-1">Expected Results</h3> <ol> <li>Before cloning machine create file run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code></li> <li>Machine should start if start VM after creation was checked</li> <li>Machine should match the origin machine <ul> <li>in Config</li> <li>In YAML</li> </ul> </li> <li>You should be able to connect to new VM via console</li> <li>file <code>test.txt</code> should exist</li> </ol> CPU overcommit on VM (e2e_fe) https://harvester.github.io/tests/manual/virtual-machines/cpu_overcommit/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/cpu_overcommit/ - Ref: https://github.com/harvester/harvester/issues/1429 Verify Items Overcommit can be edit on Dashboard VM can allocate exceed CPU on the host Node VM can chage allocated CPU after created Case: Update Overcommit configuration Install Harvester with any Node Login to Dashboard, then navigate to Advanced Settings Edit overcommit-config The field of CPU should be editable Created VM can allocate maximum CPU should be &lt;HostCPUs&gt; * [&lt;overcommit-CPU&gt;/100] - &lt;Host Reserved&gt; Case: VM can allocate CPUs more than Host have Install Harvester with any Node Create a cloud image for VM Creation Create a VM with &lt;HostCPUs&gt; * 5 CPUs 
VM should start successfully lscpu in VM should display allocated CPUs Page of Virtual Machines should display allocated CPUs correctly Case: Update VM allocated CPUs Install Harvester with any Node Create a cloud image for VM Creation Create a VM with &lt;HostCPUs&gt; * 5 CPUs VM should start successfully Increase/Reduce VM allocated CPUs to minimum/maximum VM should start successfully lscpu in VM should display allocated CPUs Page of Virtual Machines should display allocated CPUs correctly + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1429">https://github.com/harvester/harvester/issues/1429</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Overcommit can be edited on Dashboard</li> <li>VM can allocate more CPU than the host Node has</li> <li>VM can change allocated CPU after creation</li> </ul> <h2 id="case-update-overcommit-configuration">Case: Update Overcommit configuration</h2> <ol> <li>Install Harvester with any Node</li> <li>Login to Dashboard, then navigate to <strong>Advanced Settings</strong></li> <li>Edit <code>overcommit-config</code></li> <li>The field of <strong>CPU</strong> should be editable</li> <li>The maximum CPU a created VM can allocate should be <code>&lt;HostCPUs&gt; * [&lt;overcommit-CPU&gt;/100] - &lt;Host Reserved&gt;</code></li> </ol> <h2 id="case-vm-can-allocate-cpus-more-than-host-have">Case: VM can allocate more CPUs than the Host has</h2> <ol> <li>Install Harvester with any Node</li> <li>Create a cloud image for VM Creation</li> <li>Create a VM with <code>&lt;HostCPUs&gt; * 5</code> CPUs</li> <li>VM should start successfully</li> <li><code>lscpu</code> in VM should display allocated CPUs</li> <li>Page of Virtual Machines should display allocated CPUs correctly</li> </ol> <h2 id="case-update-vm-allocated-cpus">Case: Update VM allocated CPUs</h2> <ol> <li>Install Harvester with any Node</li> <li>Create a cloud image for VM Creation</li> <li>Create a VM with <code>&lt;HostCPUs&gt; * 5</code> CPUs</li> <li>VM should start successfully</li> <li>Increase/Reduce VM allocated CPUs to minimum/maximum</li> <li>VM should start successfully</li> <li><code>lscpu</code> in VM should display allocated CPUs</li> <li>Page of Virtual Machines should display allocated CPUs correctly</li> </ol> Create a new VM and add Enable USB tablet option (e2e_be_fe) https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-and-add-enable-usb-tablet-option/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-and-add-enable-usb-tablet-option/ - Add Enable usb tablet Option Save/Create VM Expected Results Machine starts successfully Enable usb tablet shows In YAML In Form + <ol> <li>Add Enable usb tablet Option</li> <li>Save/Create VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts successfully</li> <li>Enable usb tablet shows <ul> <li>In YAML</li> <li>In Form</li> </ul> </li> </ol> Create a new VM and add Install guest agent option (e2e_be_fe) https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-and-add-install-guest-agent-option/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-and-add-install-guest-agent-option/ - Add install Guest Agent Option Save/Create VM Validate that qemu-guest-agent was installed You can do this on ubuntu with the command dpkg -l | grep qemu Expected Results Machine starts successfully Guest Agent Option shows In YAML In Form Guest Agent is installed + <ol> <li>Add install
Guest Agent Option</li> <li>Save/Create VM</li> <li>Validate that qemu-guest-agent was installed <ul> <li>You can do this on ubuntu with the command <code>dpkg -l | grep qemu</code></li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts successfully</li> <li>Guest Agent Option shows <ul> <li>In YAML</li> <li>In Form</li> </ul> </li> <li>Guest Agent is installed</li> </ol> Create a new VM with Network Data from the form (e2e_fe) https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-with-network-data-from-the-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-with-network-data-from-the-form/ - Add Network Data to the VM Here is an example of Network Data config to add DHCP to the physical interface eth0 network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp Save/Create the VM Expected Results Machine starts succesfully Network Data should show in YAML Network Datashould show in Form Machine should have DHCP for network on eth0 + <ol> <li> <p>Add Network Data to the VM</p> <ul> <li> <p>Here is an example of Network Data config to add DHCP to the physical interface eth0</p> <pre tabindex="0"><code>network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp </code></pre></li> </ul> </li> <li> <p>Save/Create the VM</p> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts successfully</li> <li>Network Data should show in YAML</li> <li>Network Data should show in Form</li> <li>Machine should have DHCP for network on eth0</li> </ol> Create a new VM with Network Data from YAML (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-with-network-data-from-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-with-network-data-from-yaml/ - Add Network Data to the VM via YAML Here is an example of Network Data config to add DHCP to the physical interface eth0 network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp Save/Create the VM Expected Results Machine starts succesfully Network Data should show in YAML Network Datashould show in Form Machine should have DHCP for network on eth0 + <ol> <li>Add Network Data to the VM via YAML <ul> <li>Here is an example of Network Data config to add DHCP to the physical interface eth0 <pre tabindex="0"><code>network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp </code></pre></li> </ul> </li> <li>Save/Create the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts successfully</li> <li>Network Data should show in YAML</li> <li>Network Data should show in Form</li> <li>Machine should have DHCP for network on eth0</li> </ol> Create a new VM with User Data from the form https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-with-user-data-from-the-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-new-vm-with-user-data-from-the-form/ - Add User data to the VM Here is an example of user data config to add a password
<code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True Save/Create the VM</code></li> </ul> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts succesfully</li> <li>User data should exist <ul> <li>In YAML</li> <li>In Form</li> </ul> </li> <li>Machine should have user password set</li> </ol> Create a VM on a VLAN with an existing machine and then change the existing machine's VLAN https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-on-a-vlan-with-an-existing-machine-and-then-change-the-existing-machines-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-on-a-vlan-with-an-existing-machine-and-then-change-the-existing-machines-vlan/ - Create/edit VM/VMs with the appropriate VLAN Change VLAN for VM if appropriate Expected Results VM should create successfully Appropriate VLAN should show In config in YAML VMs should NOT be able to connect on network verify with ping/ICMP verify with SSH verify with telnet over port 80 if there&rsquo;s a web server + <ol> <li>Create/edit VM/VMs with the appropriate VLAN</li> <li>Change VLAN for VM if appropriate</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create successfully</li> <li>Appropriate VLAN should show <ul> <li>In config</li> <li>in YAML</li> </ul> </li> <li>VMs should NOT be able to connect on network <ul> <li>verify with ping/ICMP</li> <li>verify with SSH</li> <li>verify with telnet over port 80 if there&rsquo;s a web server</li> </ul> </li> </ol> Create a VM with 2 networks (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-2-networks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-2-networks/ - Add a network to the VM Save the VM Wait for it to start/restart Expected Results the VM should start successfully The already existing network connectivity should still work The new connectivity should also work Comments one default management network and one VLAN + <ol> <li>Add a network to the VM</li> <li>Save the VM</li> <li>Wait for it to start/restart</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>the VM should start successfully</li> <li>The already existing network connectivity should still work</li> <li>The new connectivity should also work</li> </ol> <h3 id="comments">Comments</h3> <p>one default management network and one VLAN</p> Create a vm with all the default values (e2e_be_fe) https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-all-the-default-values/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-all-the-default-values/ - Create a VM with all default values Save Expected Results VM should save VM should start if start after creation checkbox is checked Config should show In Form In YAML + <ol> <li>Create a VM with all default values</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should start if start after creation checkbox is checked</li> <li>Config should show <ul> <li>In Form</li> <li>In YAML</li> </ul> </li> </ol> Create a VM with Start VM on Creation checked (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-start-vm-on-creation-checked/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-start-vm-on-creation-checked/ - Create VM Expected 
Results VM should start Checkbox for start virtual machine on creation should show as appropriate while editing machine after creation + <ol> <li>Create VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should start</li> <li>Checkbox for start virtual machine on creation should show as appropriate while editing machine after creation</li> </ol> Create a VM with start VM on creation unchecked (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-start-vm-on-creation-unchecked/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-a-vm-with-start-vm-on-creation-unchecked/ - Create VM Expected Results VM should start or not start as appropriate Checkbox for start virtual machine on creation should show as appropriate while editing machine after creation + <ol> <li>Create VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should start or not start as appropriate</li> <li>Checkbox for start virtual machine on creation should show as appropriate while editing machine after creation</li> </ol> Create multiple instances of the vm with ISO image (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-multiple-instances-vm-with-iso-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-multiple-instances-vm-with-iso-image/ - Create images using the external path for ISO image. In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the 3 vms and wait for vm to start Expected Results 3 vm should come up and start with same config. Observe the time taken for the system to start the vms. Observe the pattern of the vms get allocated on the nodes. + <ol> <li>Create images using the external path for ISO image.</li> <li>In user data mention the below to access the vm.</li> <li></li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the 3 vms and wait for vm to start</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>3 vm should come up and start with same config.</li> <li>Observe the time taken for the system to start the vms.</li> <li>Observe the pattern of the vms get allocated on the nodes. Like how many vm on each nodes are created. Is there a pattern?</li> </ol> Create multiple instances of the vm with raw image (e2e_be_fe) https://harvester.github.io/tests/manual/virtual-machines/create-multiple-instances-vm-with-raw-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-multiple-instances-vm-with-raw-image/ - Create images using the external path for cloud image. In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the 3 vms and wait for vm to start. Expected Results 3 vm should come up and start with same config. Observe the time taken for the system to start the vms. Observe the pattern of the vms get allocated on the nodes. 
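<p>One way to observe how the three VMs were scheduled across the nodes (the last expected result), assuming kubectl access to the Harvester cluster; these are standard Kubernetes/KubeVirt queries rather than part of the original steps.</p> <pre tabindex="0"><code>kubectl get vmi -A -o wide                        # VirtualMachineInstances with the node each one landed on
kubectl get pods -A -o wide | grep virt-launcher  # the per-VM launcher pods, also listed per node
</code></pre>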
+ <ol> <li>Create images using the external path for cloud image.</li> <li>In user data mention the below to access the vm.</li> <li></li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the 3 vms and wait for vm to start.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>3 vm should come up and start with same config.</li> <li>Observe the time taken for the system to start the vms.</li> <li>Observe the pattern of the vms get allocated on the nodes. Like how many vm on each nodes are created. Is there a pattern?</li> </ol> Create multiple instances of the vm with Windows Image (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-multiple-instances-vm-with-windows-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-multiple-instances-vm-with-windows-image/ - Create images using the external path for ISO image. In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the 3 vms and wait for vm to start. Expected Results 3 vm should come up and start with same config. Observe the time taken for the system to start the vms. Observe the pattern of the vms get allocated on the nodes. + <ol> <li>Create images using the external path for ISO image.</li> <li>In user data mention the below to access the vm.</li> <li></li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the 3 vms and wait for vm to start.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>3 vm should come up and start with same config.</li> <li>Observe the time taken for the system to start the vms.</li> <li>Observe the pattern of the vms get allocated on the nodes. Like how many vm on each nodes are created. 
Is there a pattern?</li> </ol> Create new VM with a machine type of PC (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-new-vm-with-a-machine-type-pc/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-new-vm-with-a-machine-type-pc/ - Set up the VM with the appropriate machine type Save/create Expected Results Machine should start sucessfully Machine should show the new machine type in the config and in the YAML + <ol> <li>Set up the VM with the appropriate machine type</li> <li>Save/create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine should start successfully</li> <li>Machine should show the new machine type in the config and in the YAML</li> </ol> Create new VM with a machine type of q35 (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-new-vm-with-a-machine-type-q35/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-new-vm-with-a-machine-type-q35/ - Set up the VM with the appropriate machine type Save/create Expected Results Machine should start sucessfully Machine should show the new machine type in the config and in the YAML + <ol> <li>Set up the VM with the appropriate machine type</li> <li>Save/create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine should start successfully</li> <li>Machine should show the new machine type in the config and in the YAML</li> </ol> Create one VM on a VLAN and then move another VM to that VLAN https://harvester.github.io/tests/manual/virtual-machines/create-one-vm-on-a-vlan-and-then-move-another-vm-to-that-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-one-vm-on-a-vlan-and-then-move-another-vm-to-that-vlan/ - Create/edit VM/VMs with the appropriate VLAN Expected Results VM should create successfully Appropriate VLAN should show In config in YAML VMs should be able to connect on network This can be verified with a ping over the IP, or via other options if ICMP is disabled + <ol> <li>Create/edit VM/VMs with the appropriate VLAN</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create successfully</li> <li>Appropriate VLAN should show <ul> <li>In config</li> <li>in YAML</li> </ul> </li> <li>VMs should be able to connect on network <ul> <li>This can be verified with a ping over the IP, or via other options if ICMP is disabled</li> </ul> </li> </ol> Create one VM on a VLAN that has other VMs then change it to a different VLAN https://harvester.github.io/tests/manual/virtual-machines/create-one-vm-on-a-vlan-that-has-other-vms-then-change-it-to-a-different-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-one-vm-on-a-vlan-that-has-other-vms-then-change-it-to-a-different-vlan/ - Create/edit VM/VMs with the appropriate VLAN Change VLAN for VM if appropriate Expected Results VM should create successfully Appropriate VLAN should show In config in YAML VMs should NOT be able to connect on network verify with ping/ICMP verify with SSH verify with telnet over port 80 if there&rsquo;s a web server + <ol> <li>Create/edit VM/VMs with the appropriate VLAN</li> <li>Change VLAN for VM if appropriate</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create successfully</li> <li>Appropriate VLAN should show <ul> <li>In config</li> <li>in YAML</li> </ul> </li> <li>VMs should NOT be able to connect on network <ul>
<li>verify with ping/ICMP</li> <li>verify with SSH</li> <li>verify with telnet over port 80 if there&rsquo;s a web server</li> </ul> </li> </ol> Create Single instances of the vm with ISO image https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-iso-image-with-machine-type-pc/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-iso-image-with-machine-type-pc/ - Create vm using the external path for ISO image. In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the vm and wait for vm to start. Expected Results VM should come up and start with same config. + <ol> <li>Create vm using the external path for ISO image.</li> <li>In user data mention the below to access the vm.</li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the vm and wait for vm to start.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should come up and start with same config.</li> </ol> Create Single instances of the vm with ISO image (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-iso-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-iso-image/ - Create vm using the external path for ISO image. In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the vm and wait for vm to start. Expected Results VM should come up and start with same config. + <ol> <li>Create vm using the external path for ISO image.</li> <li>In user data mention the below to access the vm.</li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the vm and wait for vm to start.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should come up and start with same config.</li> </ol> Create Single instances of the vm with ISO image with machine type pc https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-iso-image-with-machine-type-q35/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-iso-image-with-machine-type-q35/ - Create vm using the external path for ISO image. In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the vm and wait for vm to start. Expected Results VM should come up and start with same config. + <ol> <li>Create vm using the external path for ISO image.</li> <li>In user data mention the below to access the vm.</li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the vm and wait for vm to start.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should come up and start with same config.</li> </ol> Create Single instances of the vm with raw image (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-raw-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-raw-image/ - Create vm using the external path for cloud image. 
In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the vm and wait for vm to start. Expected Results VM should come up and start with same config. + <ol> <li>Create vm using the external path for cloud image.</li> <li>In user data mention the below to access the vm.</li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the vm and wait for vm to start.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should come up and start with same config.</li> </ol> Create Single instances of the vm with Windows Image (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-windows-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-single-instances-vm-with-windows-image/ - Create vm using the external path for ISO image. In user data mention the below to access the vm. #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Create the vm and wait for vm to start. Expected Results VM should come up and start with same config. + <ol> <li>Create vm using the external path for ISO image.</li> <li>In user data mention the below to access the vm.</li> </ol> <pre tabindex="0"><code>#cloud-config password: password chpasswd: {expire: False} sshpwauth: True </code></pre><ol> <li>Create the vm and wait for vm to start.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should come up and start with same config.</li> </ol> Create two VMs in the same VLAN (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-two-vms-in-the-same-vlan/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-two-vms-in-the-same-vlan/ - Create/edit VM/VMs with the appropriate VLAN Expected Results VM should create successfully Appropriate VLAN should show In config in YAML VMs should be able to connect on network This can be verified with a ping over the IP, or via other options if ICMP is disabled + <ol> <li>Create/edit VM/VMs with the appropriate VLAN</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create successfully</li> <li>Appropriate VLAN should show <ul> <li>In config</li> <li>in YAML</li> </ul> </li> <li>VMs should be able to connect on network <ul> <li>This can be verified with a ping over the IP, or via other options if ICMP is disabled</li> </ul> </li> </ol> Create two VMs on separate VLANs https://harvester.github.io/tests/manual/virtual-machines/create-two-vms-on-separate-vlans/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-two-vms-on-separate-vlans/ - Create/edit VM/VMs with the appropriate VLAN Change VLAN for VM if appropriate Expected Results VM should create successfully Appropriate VLAN should show In config in YAML VMs should NOT be able to connect on network verify with ping/ICMP verify with SSH verify with telnet over port 80 if there&rsquo;s a web server + <ol> <li>Create/edit VM/VMs with the appropriate VLAN</li> <li>Change VLAN for VM if appropriate</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create successfully</li> <li>Appropriate VLAN should show <ul> <li>In config</li> <li>in YAML</li> </ul> </li> <li>VMs should NOT be able to connect on network <ul> <li>verify with ping/ICMP</li> <li>verify with SSH</li> <li>verify 
with telnet over port 80 if there&rsquo;s a web server</li> </ul> </li> </ol> Create two VMs on the same VLAN and change one https://harvester.github.io/tests/manual/virtual-machines/create-two-vms-on-the-same-vlan-and-change-one/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-two-vms-on-the-same-vlan-and-change-one/ - Create/edit VM/VMs with the appropriate VLAN Change VLAN for VM if appropriate Expected Results VM should create successfully Appropriate VLAN should show In config in YAML VMs should NOT be able to connect on network verify with ping/ICMP verify with SSH verify with telnet over port 80 if there&rsquo;s a web server + <ol> <li>Create/edit VM/VMs with the appropriate VLAN</li> <li>Change VLAN for VM if appropriate</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create successfully</li> <li>Appropriate VLAN should show <ul> <li>In config</li> <li>in YAML</li> </ul> </li> <li>VMs should NOT be able to connect on network <ul> <li>verify with ping/ICMP</li> <li>verify with SSH</li> <li>verify with telnet over port 80 if there&rsquo;s a web server</li> </ul> </li> </ol> Create VM and add SSH key (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-and-add-ssh-key/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-and-add-ssh-key/ - Create VM Add SSH key if not already in VM Logon with SSH Expected Results You should be prompted for SSH key passphrase if appropriate You should connect You should be able to execute shell commands The SSH Key should show in the SSH key list + <ol> <li>Create VM</li> <li>Add SSH key if not already in VM</li> <li>Logon with SSH</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should be prompted for SSH key passphrase if appropriate</li> <li>You should connect</li> <li>You should be able to execute shell commands</li> <li>The SSH Key should show in the SSH key list</li> </ol> Create vm using a template of default version https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-of-default-version/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-of-default-version/ - Create a new VM with a template of default version Expected Results After selecting appropriate template and/or version it should populate other fields CPU, Memory, Image, and SSH key should match saved template info VM should start after creation if Start Virtual Machine is selected + <ol> <li>Create a new VM with a template of default version</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>After selecting appropriate template and/or version it should populate other fields</li> <li>CPU, Memory, Image, and SSH key should match saved template info</li> <li>VM should start after creation if <code>Start Virtual Machine</code> is selected</li> </ol> Create vm using a template of default version with machine type pc https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-of-default-version-with-machine-type-pc/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-of-default-version-with-machine-type-pc/ - Create a new VM with a template of default version Expected Results After selecting appropriate template and/or version it should populate other fields CPU, Memory, Image, and SSH key should match saved template info 
VM should start after creation if Start Virtual Machine is selected + <ol> <li>Create a new VM with a template of default version</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>After selecting appropriate template and/or version it should populate other fields</li> <li>CPU, Memory, Image, and SSH key should match saved template info</li> <li>VM should start after creation if <code>Start Virtual Machine</code> is selected</li> </ol> Create vm using a template of default version with machine type q35 https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-of-default-version-with-machine-type-q35/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-of-default-version-with-machine-type-q35/ - Create a new VM with a template of default version Expected Results After selecting appropriate template and/or version it should populate other fields CPU, Memory, Image, and SSH key should match saved template info VM should start after creation if Start Virtual Machine is selected + <ol> <li>Create a new VM with a template of default version</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>After selecting appropriate template and/or version it should populate other fields</li> <li>CPU, Memory, Image, and SSH key should match saved template info</li> <li>VM should start after creation if <code>Start Virtual Machine</code> is selected</li> </ol> Create vm using a template of non-default version (e2e_fe) https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-non-default-version/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-using-a-template-non-default-version/ - Create a new VM with a template of non-default version Expected Results After selecting appropriate template and/or version it should populate other fields CPU, Memory, Image, and SSH key should match saved template info + <ol> <li>Create a new VM with a template of non-default version</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>After selecting appropriate template and/or version it should populate other fields</li> <li>CPU, Memory, Image, and SSH key should match saved template info</li> </ol> Create vm with both CPU and Memory not in cluster (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-both-cpu-and-memory-not-in-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-both-cpu-and-memory-not-in-cluster/ - Attempt to create a VM with the appropriate resources Expected Results You should get errors for each resource you over provisioned The VM should not create until errors are resolved + <ol> <li>Attempt to create a VM with the appropriate resources</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get errors for each resource you over provisioned The VM should not create until errors are resolved</li> </ol> Create vm with CPU not in cluster. 
(e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-cpu-not-in-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-cpu-not-in-cluster/ - Attempt to create a VM with the appropriate resources Expected Results You should get errors for each resource you over provisioned The VM should not create until errors are resolved + <ol> <li>Attempt to create a VM with the appropriate resources</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get errors for each resource you over provisioned The VM should not create until errors are resolved</li> </ol> Create VM with existing Volume (e2e_be_fe) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-existing-volume/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-existing-volume/ - Create VM with an existing volume Expected Results VM should create and start You should be able to open the console for the VM and see it boot Volume should show in volumes list VM should appear to the &ldquo;Attached VM&rdquo; column of the existing volume + <ol> <li>Create VM with an existing volume</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create and start</li> <li>You should be able to open the console for the VM and see it boot</li> <li>Volume should show in volumes list</li> <li>VM should appear to the &ldquo;Attached VM&rdquo; column of the existing volume</li> </ol> Create vm with Memory not in cluster. (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-memory-not-in-cluster/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-memory-not-in-cluster/ - Attempt to create a VM with the appropriate resources Expected Results You should get errors for each resource you over provisioned The VM should not create until errors are resolved + <ol> <li>Attempt to create a VM with the appropriate resources</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should get errors for each resource you over provisioned</li> <li>The VM should not create until errors are resolved</li> </ol> Create VM with resources that are only on one node in cluster CPU https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu/ - Edit a VM with resources that are only available on one node in cluster. 
Expected Results VM should save VM should be reassigned to node that has available resources VM should boot VM should pass health checks + <ol> <li>Edit a VM with resources that are only available on one node in cluster.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should be reassigned to node that has available resources</li> <li>VM should boot</li> <li>VM should pass health checks</li> </ol> Create VM with resources that are only on one node in cluster CPU (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu/ - Create a VM with resources that are only available on one node in cluster Expected Results VM should create VM should be assigned to node that has available resources VM should boot VM should pass health checks + <ol> <li>Create a VM with resources that are only available on one node in cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create</li> <li>VM should be assigned to node that has available resources</li> <li>VM should boot</li> <li>VM should pass health checks</li> </ol> Create VM with resources that are only on one node in cluster CPU and Memory (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu-and-memory/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu-and-memory/ - Create a VM with resources that are only available on one node in cluster Expected Results VM should create VM should be assigned to node that has available resources VM should boot VM should pass health checks + <ol> <li>Create a VM with resources that are only available on one node in cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create</li> <li>VM should be assigned to node that has available resources</li> <li>VM should boot</li> <li>VM should pass health checks</li> </ol> Create VM with resources that are only on one node in cluster Memory https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-with-resources-that-are-only-on-one-node-in-cluster-memory/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-with-resources-that-are-only-on-one-node-in-cluster-memory/ - Edit a VM with resources that are only available on one node in cluster. 
Expected Results VM should save VM should be reassigned to node that has available resources VM should boot VM should pass health checks + <ol> <li>Edit a VM with resources that are only available on one node in cluster.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should be reassigned to node that has available resources</li> <li>VM should boot</li> <li>VM should pass health checks</li> </ol> Create VM with resources that are only on one node in cluster Memory (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-resources-that-are-only-on-one-node-in-cluster-memory/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-resources-that-are-only-on-one-node-in-cluster-memory/ - Create a VM with resources that are only available on one node in cluster Expected Results VM should create VM should be assigned to node that has available resources VM should boot VM should pass health checks + <ol> <li>Create a VM with resources that are only available on one node in cluster</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create</li> <li>VM should be assigned to node that has available resources</li> <li>VM should boot</li> <li>VM should pass health checks</li> </ol> Create VM with saved SSH key (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-saved-ssh-key/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-saved-ssh-key/ - Create VM Add SSH key if not already in VM Logon with SSH Expected Results You should be prompted for SSH key passphrase if appropriate You should connect You should be able to execute shell commands The SSH Key should show in the SSH key list + <ol> <li>Create VM</li> <li>Add SSH key if not already in VM</li> <li>Logon with SSH</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should be prompted for SSH key passphrase if appropriate</li> <li>You should connect</li> <li>You should be able to execute shell commands</li> <li>The SSH Key should show in the SSH key list</li> </ol> Create VM with the default network (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-the-default-network/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-the-default-network/ - Create a VM with the default network Let VM boot up after creation Expected Results VM should start VM should be able to ping other machines in the VLAN VM should be able to ping servers on the internet if the VLAN has external access + <ol> <li>Create a VM with the default network</li> <li>Let VM boot up after creation</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should start</li> <li>VM should be able to ping other machines in the VLAN</li> <li>VM should be able to ping servers on the internet if the VLAN has external access</li> </ol> Create VM with two disk volumes (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-two-disk-volumes/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-with-two-disk-volumes/ - Create a VM with the appropriate number of volumes Expected Results Verify after creation that the appropriate volumes are in the config for the VM Verify that the volumes are created and listed in the volumes section + <ol> <li>Create a VM with the appropriate number 
of volumes</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Verify after creation that the appropriate volumes are in the config for the VM</li> <li>Verify that the volumes are created and listed in the volumes section</li> </ol> Create VM without memory provided (e2e_fe) https://harvester.github.io/tests/manual/virtual-machines/create-vm-without-memory-provided/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-vm-without-memory-provided/ - Related issues: #1477 intimidating error message when missing mandatory field Category Virtual Machine Verification Steps Create some image and volume Create virtual machine Fill out all mandatory field but leave memory blank. Click create Expected Results Leave empty memory field empty when create virtual machine will show &ldquo;Memory is required&rdquo; error message + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1477">#1477</a> intimidating error message when missing mandatory field</li> </ul> <h2 id="category">Category</h2> <ul> <li>Virtual Machine</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create some image and volume</li> <li>Create virtual machine</li> <li>Fill out all mandatory field but leave memory blank.</li> <li>Click create</li> </ol> <h2 id="expected-results">Expected Results</h2> <p>Leave empty memory field empty when create virtual machine will show &ldquo;Memory is required&rdquo; error message</p> <p><img src="https://user-images.githubusercontent.com/29251855/140006054-92b12a07-af8b-4087-9fc8-4cf76c6500ea.png" alt="image"></p> Create Windows VM https://harvester.github.io/tests/manual/virtual-machines/create-windows-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/create-windows-vm/ - Create a VM with the VM template with windows-iso-image-base-temp Config the CPU and Memory to 4 and 8 respectively Select the windows ISO image Click the Volumes tab and update the root disk size to 50GB Click create to launch the windows VM Optional: you can increase the second disk size or add an additional one. Click create to launch the VM (this will take a couple of minutes upon your network speed of download the ISO image) Click the Console to launch a VNC console of the windows server, and you will need to find an evaluation key of the windows server 2012 installation. 
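The Windows VM steps above pick 4 CPUs, 8 (GiB) of memory and a 50GB root disk; a hedged way to confirm those values actually landed is to read the VM back from the cluster. This is only a sketch: it assumes kubectl access to the Harvester cluster and a VM named win-vm in the default namespace (both placeholder names), and it relies on Harvester exposing VMs as KubeVirt resources.
<pre tabindex="0"><code># sketch only: win-vm and default are placeholders, adjust to your VM
kubectl get virtualmachine win-vm -n default -o yaml            # requested CPU/memory and volumes
kubectl get virtualmachineinstance win-vm -n default -o wide    # node the instance landed on and its IP
</code></pre>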
+ <ol> <li>Create a VM with the VM template with windows-iso-image-base-temp</li> <li>Config the CPU and Memory to 4 and 8 respectively</li> <li>Select the windows ISO image</li> <li>Click the Volumes tab and update the root disk size to 50GB</li> <li>Click create to launch the windows VM</li> <li>Optional: you can increase the second disk size or add an additional one.</li> <li>Click create to launch the VM (this will take a couple of minutes upon your network speed of download the ISO image)</li> <li>Click the Console to launch a VNC console of the windows server, and you will need to find an evaluation key of the windows server 2012 installation.</li> <li>Optional: you may continue to create other VMs as described in the below doc to skip the image downloading and installation times.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should come up and start.</li> </ol> Delete multiple VMs with disks (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/delete-multiple-vms-with-disks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/delete-multiple-vms-with-disks/ - Delete VM Select whether you want to delete disks Expected Results You should check amount of used space on Server before you delete the VM Machine should delete It should not show up in the Virtual Machine list Disks should be listed/or not in Volumes list as appropriate Verify the cleaned up the space on the disk on the node. + <ol> <li>Delete VM</li> <li>Select whether you want to delete disks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should check amount of used space on Server before you delete the VM</li> <li>Machine should delete</li> <li>It should not show up in the Virtual Machine list</li> <li>Disks should be listed/or not in Volumes list as appropriate</li> <li>Verify the cleaned up the space on the disk on the node.</li> </ol> Delete multiple VMs without disks (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/delete-multiple-vms-without-disks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/delete-multiple-vms-without-disks/ - Delete VM Select whether you want to delete disks Expected Results You should check amount of used space on Server before you delete the VM Machine should delete It should not show up in the Virtual Machine list Disks should be listed/or not in Volumes list as appropriate Verify the cleaned up the space on the disk on the node. + <ol> <li>Delete VM</li> <li>Select whether you want to delete disks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should check amount of used space on Server before you delete the VM</li> <li>Machine should delete</li> <li>It should not show up in the Virtual Machine list</li> <li>Disks should be listed/or not in Volumes list as appropriate</li> <li>Verify the cleaned up the space on the disk on the node.</li> </ol> Delete single vm all disks (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/delete-single-vm-all-disks/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/delete-single-vm-all-disks/ - Delete VM Select whether you want to delete disks Expected Results You should check amount of used space on Server before you delete the VM Machine should delete It should not show up in the Virtual Machine list Disks should be listed/or not in Volumes list as appropriate Verify the cleaned up the space on the disk on the node. 
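For the space check called out in the delete tests above, one possible way to record usage on the node before and after deleting the VM is sketched below. It assumes SSH access to the node as the rancher user and the default Harvester Longhorn data path /var/lib/harvester/defaultdisk; adjust the path if the node uses additional disks.
<pre tabindex="0"><code># run on the node hosting the VM, once before and once after deletion
df -h /var/lib/harvester/defaultdisk                 # overall usage of the assumed default data disk
du -sh /var/lib/harvester/defaultdisk/replicas/*     # per-replica usage of Longhorn volumes
</code></pre>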
+ <ol> <li>Delete VM</li> <li>Select whether you want to delete disks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should check amount of used space on Server before you delete the VM</li> <li>Machine should delete</li> <li>It should not show up in the Virtual Machine list</li> <li>Disks should be listed/or not in Volumes list as appropriate</li> <li>Verify the cleaned up the space on the disk on the node.</li> </ol> Delete VM Negative (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/negative-delete-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/negative-delete-vm/ - In a multi-node setup disconnect/shutdown the node where the VM is running Delete VM and all disks Expected Results You should not be able to delete the VM + <ol> <li>In a multi-node setup disconnect/shutdown the node where the VM is running</li> <li>Delete VM and all disks</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should not be able to delete the VM</li> </ol> Delete VM with exported image (e2e_fe) https://harvester.github.io/tests/manual/virtual-machines/1602-delete-vm-with-exported-image/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/1602-delete-vm-with-exported-image/ - Related issues: #1602 exported image can&rsquo;t be deleted after vm removed Verification Steps create vm &ldquo;vm-1&rdquo; create a image &ldquo;img-1&rdquo; by export the volume used by vm &ldquo;vm-1&rdquo; delete vm &ldquo;vm-1&rdquo; delete image &ldquo;img-1&rdquo; Expected Results image &ldquo;img-1&rdquo; will be deleted + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1602">#1602</a> exported image can&rsquo;t be deleted after vm removed</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>create vm &ldquo;vm-1&rdquo;</li> <li>create a image &ldquo;img-1&rdquo; by export the volume used by vm &ldquo;vm-1&rdquo;</li> <li>delete vm &ldquo;vm-1&rdquo;</li> <li>delete image &ldquo;img-1&rdquo;</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>image &ldquo;img-1&rdquo; will be deleted</li> </ol> Edit a VM and add install Enable usb tablet option (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-and-add-install-enable-usb-tablet-option/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-and-add-install-enable-usb-tablet-option/ - Add Enable usb tablet Option Save/Create VM Expected Results Machine starts successfully Enable usb tablet shows In YAML In Form + <ol> <li>Add Enable usb tablet Option</li> <li>Save/Create VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts successfully</li> <li>Enable usb tablet shows <ul> <li>In YAML</li> <li>In Form</li> </ul> </li> </ol> Edit a VM and add install guest agent option (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-and-add-install-guest-agent-option/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-and-add-install-guest-agent-option/ - Add install Guest Agent Option Save/Create VM Expected Results Machine starts successfully Guest Agent Option shows In YAML In Form Guest Agent is installed + <ol> <li>Add install Guest Agent Option</li> <li>Save/Create VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts successfully</li> <li>Guest Agent Option shows <ul> 
<li>In YAML</li> <li>In Form</li> </ul> </li> <li>Guest Agent is installed</li> </ol> Edit a VM from the form to add Network Data https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-form-to-add-network-data/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-form-to-add-network-data/ - Add Network Data to the VM Here is an example of Network Data config to add DHCP to the physical interface eth0 network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp Save/Create the VM Expected Results Machine starts successfully Network Data should show in YAML Network Data should show in Form Machine should have DHCP for network on eth0 + <ol> <li>Add Network Data to the VM <ul> <li>Here is an example of Network Data config to add DHCP to the physical interface eth0</li> </ul> <pre tabindex="0"><code>network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp </code></pre></li> <li>Save/Create the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts successfully</li> <li>Network Data should show in YAML</li> <li>Network Data should show in Form</li> <li>Machine should have DHCP for network on eth0</li> </ol> Edit a VM from the form to add user data (e2e_fe) https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-form-to-add-user-data/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-form-to-add-user-data/ - Add User data to the VM Here is an example of user data config to add a password `` #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Save/Create the VM Expected Results Machine starts successfully User data should In YAML In Form Machine should have user password set + <ol> <li> <p>Add User data to the VM</p> <ul> <li>Here is an example of user data config to add a password `` #cloud-config password: password chpasswd: {expire: False} sshpwauth: True</li> </ul> <pre tabindex="0"><code></code></pre></li> <li> <p>Save/Create the VM</p> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts successfully</li> <li>User data should <ul> <li>In YAML</li> <li>In Form</li> </ul> </li> <li>Machine should have user password set</li> </ol> Edit a VM from the YAML to add Network Data (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-yaml-to-add-network-data/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-yaml-to-add-network-data/ - Add Network Data to the VM Here is an example of Network Data config to add DHCP to the physical interface eth0 network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp Save/Create the VM Expected Results Machine starts successfully Network Data should show in YAML Network Data should show in Form Machine should have DHCP for network on eth0 + <ol> <li>Add Network Data to the VM <ul> <li>Here is an example of Network Data config to add DHCP to the physical interface eth0</li> </ul> <pre tabindex="0"><code>network: version: 1 config: - type: physical name: eth0 subnets: - type: dhcp </code></pre></li> <li>Save/Create the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts successfully</li> <li>Network Data should show in YAML</li> <li>Network Data should show in Form</li> <li>Machine should have DHCP for network on eth0</li> </ol> Edit a VM from the YAML to add 
user data (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-yaml-to-add-user-data/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-from-the-yaml-to-add-user-data/ - Add User data to the VM Here is an example of user data config to add a password `` #cloud-config password: password chpasswd: {expire: False} sshpwauth: True Save/Create the VM Expected Results Machine starts successfully User data should In YAML In Form Machine should have user password set + <ol> <li>Add User data to the VM <ul> <li>Here is an example of user data config to add a password `` #cloud-config password: password chpasswd: {expire: False} sshpwauth: True</li> </ul> <pre tabindex="0"><code></code></pre></li> <li>Save/Create the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine starts successfully</li> <li>User data should <ul> <li>In YAML</li> <li>In Form</li> </ul> </li> <li>Machine should have user password set</li> </ol> Edit an existing VM to another machine type (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-an-existing-vm-to-another-machine-type/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-an-existing-vm-to-another-machine-type/ - Set up the VM with the appropriate machine type Save/create Expected Results Machine should start successfully Machine should show the new machine type in the config and in the YAML + <ol> <li>Set up the VM with the appropriate machine type</li> <li>Save/create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Machine should start successfully</li> <li>Machine should show the new machine type in the config and in the YAML</li> </ol> Edit vm and insert ssh and check the ssh key is accepted for the login (e2e_be_fe) https://harvester.github.io/tests/manual/virtual-machines/edit-vm-and-insert-ssh-and-check-the-ssh-key-is-accepted-for-the-login/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-and-insert-ssh-and-check-the-ssh-key-is-accepted-for-the-login/ - Edit VM and add SSH Key Save VM Expected Results You should be able to ssh in with correct SSH private key You should not be able to SSH in with incorrect SSH private key + <ol> <li>Edit VM and add SSH Key</li> <li>Save VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should be able to ssh in with correct SSH private key</li> <li>You should not be able to SSH in with incorrect SSH private key</li> </ol> Edit vm config after Eject CDROM and delete volume https://harvester.github.io/tests/manual/virtual-machines/5264-edit-vm-config-after-eject-cdrom-delete-volume/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/5264-edit-vm-config-after-eject-cdrom-delete-volume/ - Related issues: #5264 [BUG] After EjectCD from vm and edit config of vm displays empty page: &ldquo;Cannot read properties of null&rdquo; Category: Virtual Machines Verification Steps Upload the ISO type desktop image (e.g ubuntu-20.04.4-desktop-amd64.iso) Create a vm named vm1 with the iso image Open the web console to check content Click EjectCD after vm running Select the delete volume option Wait until vm restart to running Click the edit config Back to the virtual machine page Click the vm1 name Expected Results Check can edit vm config of vm1 to display all settings correctly Check can display the current vm1 settings correctly + <ul> <li>Related 
issues: <a href="https://github.com/harvester/harvester/issues/5264">#5264</a> [BUG] After EjectCD from vm and edit config of vm displays empty page: &ldquo;Cannot read properties of null&rdquo;</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Virtual Machines</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Upload the ISO type desktop image (e.g ubuntu-20.04.4-desktop-amd64.iso)</li> <li>Create a vm named <code>vm1</code> with the iso image</li> <li>Open the web console to check content</li> <li>Click EjectCD after vm running</li> <li>Select the <code>delete volume</code> option</li> <li>Wait until vm restart to running</li> <li>Click the edit config</li> <li>Back to the virtual machine page</li> <li>Click the <code>vm1</code> name</li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Check can edit vm config of <code>vm1</code> to display all settings correctly <img src="https://harvester.github.io/tests/images/virtual-machines/5264-edit-vm-cofig-after-delete-volume.png" alt="images/virtual-machines/5264-edit-vm-cofig-after-delete-volume.png"></li> <li>Check can display the current <code>vm1</code> settings correctly</li> </ul> Edit VM Form Negative https://harvester.github.io/tests/manual/virtual-machines/negative-edit-vm-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/negative-edit-vm-form/ - In a multi-node setup disconnect/shutdown the node where the VM is running Edit the VM via form Save the VM Expected Results You should not be able to save the edited Form You should get an error + <ol> <li>In a multi-node setup disconnect/shutdown the node where the VM is running</li> <li>Edit the VM via form</li> <li>Save the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should not be able to save the edited Form</li> <li>You should get an error</li> </ol> Edit vm network and verify the network is working as per configuration (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-vm-network-and-verify-the-network-is-working-as-per-configuration-/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-network-and-verify-the-network-is-working-as-per-configuration-/ - Edit VM network Save Expected Results VM should save VM should restart if restart checkbox is checked Changes should show In Form In YAML Network should function as desired + <ol> <li>Edit VM network</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should restart if restart checkbox is checked</li> <li>Changes should show <ul> <li>In Form</li> <li>In YAML</li> </ul> </li> <li>Network should function as desired</li> </ol> Edit VM via form with CPU https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-form-with-cpu/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-form-with-cpu/ - Edit VM Save Expected Results VM should save VM should restart if restart checkbox is checked Changes should show In Form In YAML In VM list + <ol> <li>Edit VM</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should restart if restart checkbox is checked</li> <li>Changes should show <ul> <li>In Form</li> <li>In YAML</li> <li>In VM list</li> </ul> </li> </ol> Edit VM via form with CPU and Memory https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-form-with-cpu-and-memory/ Mon, 01 
Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-form-with-cpu-and-memory/ - Edit VM Save Expected Results VM should save VM should restart if restart checkbox is checked Changes should show In Form In YAML In VM list + <ol> <li>Edit VM</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should restart if restart checkbox is checked</li> <li>Changes should show <ul> <li>In Form</li> <li>In YAML</li> <li>In VM list</li> </ul> </li> </ol> Edit VM via form with Memory https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-form-with-memory/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-form-with-memory/ - Edit VM Save Expected Results VM should save VM should restart if restart checkbox is checked Changes should show In Form In YAML In VM list + <ol> <li>Edit VM</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should restart if restart checkbox is checked</li> <li>Changes should show <ul> <li>In Form</li> <li>In YAML</li> <li>In VM list</li> </ul> </li> </ol> Edit VM via YAML with CPU (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-yaml-with-cpu/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-yaml-with-cpu/ - Edit VM Save Expected Results VM should save VM should restart if restart checkbox is checked Changes should show In Form In YAML In VM list + <ol> <li>Edit VM</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should restart if restart checkbox is checked</li> <li>Changes should show <ul> <li>In Form</li> <li>In YAML</li> <li>In VM list</li> </ul> </li> </ol> Edit VM via YAML with CPU and Memory (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-yaml-with-cpu-and-memory/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-yaml-with-cpu-and-memory/ - Edit VM Save Expected Results VM should save VM should restart if restart checkbox is checked Changes should show In Form In YAML In VM list + <ol> <li>Edit VM</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should restart if restart checkbox is checked</li> <li>Changes should show <ul> <li>In Form</li> <li>In YAML</li> <li>In VM list</li> </ul> </li> </ol> Edit VM via YAML with Memory (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-yaml-with-memory/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/edit-vm-via-yaml-with-memory/ - Edit VM Save Expected Results VM should save VM should restart if restart checkbox is checked Changes should show In Form In YAML In VM list + <ol> <li>Edit VM</li> <li>Save</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should restart if restart checkbox is checked</li> <li>Changes should show <ul> <li>In Form</li> <li>In YAML</li> <li>In VM list</li> </ul> </li> </ol> Edit VM with resources that are only on one node in cluster CPU and Memory https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu-and-memory/ Mon, 01 Jan 0001 00:00:00 +0000 
https://harvester.github.io/tests/manual/virtual-machines/edit-a-vm-with-resources-that-are-only-on-one-node-in-cluster-cpu-and-memory/ - Edit a VM with resources that are only available on one node in cluster. Expected Results VM should save VM should be reassigned to node that has available resources VM should boot VM should pass health checks + <ol> <li>Edit a VM with resources that are only available on one node in cluster.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should save</li> <li>VM should be reassigned to node that has available resources</li> <li>VM should boot</li> <li>VM should pass health checks</li> </ol> Edit VM YAML Negative https://harvester.github.io/tests/manual/virtual-machines/q-negative-edit-vm-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/q-negative-edit-vm-yaml/ - In a multi-node setup disconnect/shutdown the node where the VM is running Edit the VM via YAML Save the VM Expected Results SSH to the node and check the nodes has components deleted. + <ol> <li>In a multi-node setup disconnect/shutdown the node where the VM is running</li> <li>Edit the VM via YAML</li> <li>Save the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>SSH to the node and check the nodes has components deleted.</li> </ol> Memory overcommit on VM https://harvester.github.io/tests/manual/virtual-machines/memory_overcommit/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/memory_overcommit/ - Ref: https://github.com/harvester/harvester/issues/1537 Verify Items Overcommit can be edit on Dashboard VM can allocate exceed Memory on the host Node VM can chage allocated Memory after created Case: Update Overcommit configuration Install Harvester with any Node Login to Dashboard, then navigate to Advanced Settings Edit overcommit-config The field of Memory should be editable Created VM can allocate maximum Memory should be &lt;HostMemory&gt; * [&lt;overcommit-Memory&gt;/100] - &lt;Host Reserved&gt; Case: VM can allocate Memory more than Host have Install Harvester with any Node Create a cloud image for VM Creation Create a VM with &lt;HostMemory&gt; * 1. 
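Besides editing overcommit-config on the dashboard as the steps above describe, the same setting can be inspected from the command line. The sketch below assumes kubectl access and that the setting keeps Harvester's JSON form with a percentage per resource; the values in the comment are illustrative, not your cluster's defaults. With memory at 150, for example, a host with 32Gi of RAM could hand out roughly 32 * 150/100 minus the host-reserved amount.
<pre tabindex="0"><code># print the current overcommit percentages (empty output means the built-in default applies)
kubectl get settings.harvesterhci.io overcommit-config -o jsonpath='{.value}'
# illustrative output: cpu 1600, memory 150, storage 200 (all percentages)
</code></pre>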
+ <p>Ref: <a href="https://github.com/harvester/harvester/issues/1537">https://github.com/harvester/harvester/issues/1537</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Overcommit can be edit on Dashboard</li> <li>VM can allocate exceed Memory on the host Node</li> <li>VM can chage allocated Memory after created</li> </ul> <h2 id="case-update-overcommit-configuration">Case: Update Overcommit configuration</h2> <ol> <li>Install Harvester with any Node</li> <li>Login to Dashboard, then navigate to <strong>Advanced Settings</strong></li> <li>Edit <code>overcommit-config</code></li> <li>The field of <strong>Memory</strong> should be editable</li> <li>Created VM can allocate maximum Memory should be <code>&lt;HostMemory&gt; * [&lt;overcommit-Memory&gt;/100] - &lt;Host Reserved&gt;</code></li> </ol> <h2 id="case-vm-can-allocate-memory-more-than-host-have">Case: VM can allocate Memory more than Host have</h2> <ol> <li>Install Harvester with any Node</li> <li>Create a cloud image for VM Creation</li> <li>Create a VM with <code>&lt;HostMemory&gt; * 1.2</code> Memory</li> <li>VM should start successfully</li> <li><code>lscpu</code> in VM should display allocated Memory</li> <li>Page of Virtual Machines should display allocated Memory correctly</li> </ol> <h2 id="case-update-vm-allocated-memory">Case: Update VM allocated Memory</h2> <ol> <li>Install Harvester with any Node</li> <li>Create a cloud image for VM Creation</li> <li>Create a VM with <code>&lt;HostMemory&gt; * 1.2</code> Memory</li> <li>VM should start successfully</li> <li>Increase/Reduce VM allocated Memory to minimum/maximum</li> <li>VM should start successfully</li> <li><code>lscpu</code> in VM should display allocated Memory</li> <li>Page of Virtual Machines should display allocated Memory correctly</li> </ol> Negative vm clone tests https://harvester.github.io/tests/manual/virtual-machines/negative-vm-clone/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/negative-vm-clone/ - Case 1 Create a harvester cluster. Create a VM source-vm with 3 volumes: Image Volume Volume Container Volume After VM starts, run command echo &quot;123&quot; &gt; test.txt &amp;&amp; sync. Click clone button on the source-vm and input new VM name target-vm. Delete source-vm while still cloning Expected Results target-vm should finish cloning After cloning run command cat ~/test.txt in the target-vm. The result should be 123. Case 2 Create a harvester cluster. + <h3 id="case-1">Case 1</h3> <ol> <li>Create a harvester cluster.</li> <li>Create a VM <code>source-vm</code> with 3 volumes: <ul> <li>Image Volume</li> <li>Volume</li> <li>Container Volume</li> </ul> </li> <li>After VM starts, run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code>.</li> <li>Click <code>clone</code> button on the <code>source-vm</code> and input new VM name <code>target-vm</code>.</li> <li>Delete <code>source-vm</code> while still cloning</li> </ol> <h4 id="expected-results">Expected Results</h4> <ul> <li><code>target-vm</code> should finish cloning</li> <li>After cloning run command <code>cat ~/test.txt</code> in the <code>target-vm</code>. 
The result should be <code>123</code>.</li> </ul> <h3 id="case-2">Case 2</h3> <ol> <li>Create a harvester cluster.</li> <li>Create a VM <code>source-vm</code> with 3 volumes: <ul> <li>Image Volume</li> <li>Volume</li> <li>Container Volume</li> </ul> </li> <li>After VM starts, run command <code>echo &quot;123&quot; &gt; test.txt &amp;&amp; sync</code>.</li> <li>Click <code>clone</code> button on the <code>source-vm</code> and input new VM name <code>target-vm</code>.</li> <li>Turn off node that has <code>source-vm</code> while cloning</li> <li>Wait for clone to finish</li> </ol> <h4 id="expected-results-1">Expected Results</h4> <ul> <li><code>target-vm</code> should finish cloning on node</li> <li><code>source-vm</code> should have migrated to new node</li> <li>After cloning run command <code>cat ~/test.txt</code> in the <code>target-vm</code>. The result should be <code>123</code>.</li> </ul> Run multiple instances of the console https://harvester.github.io/tests/manual/virtual-machines/run-multiple-instances-console/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/run-multiple-instances-console/ - Open up the console on two browsers to simulate multiple connections Login with both browsers create a new file on both instances Edit the file from the other instance and save Verify that you can see the changes from the other instance Expected Results You should be able to login from multiple browsers File should create File should update You should be able to see changes from all instances + <ol> <li>Open up the console on two browsers to simulate multiple connections</li> <li>Login with both browsers</li> <li>create a new file on both instances</li> <li>Edit the file from the other instance and save</li> <li>Verify that you can see the changes from the other instance</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should be able to login from multiple browsers</li> <li>File should create</li> <li>File should update</li> <li>You should be able to see changes from all instances</li> </ol> Start VM and stop node Negative https://harvester.github.io/tests/manual/virtual-machines/negative-start-vm-and-stop-node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/negative-start-vm-and-stop-node/ - Start the VM In a multi-node setup disconnect/shutdown the node where the VM is running Expected Results You should not be able to start the VM + <ol> <li>Start the VM</li> <li>In a multi-node setup disconnect/shutdown the node where the VM is running</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should not be able to start the VM</li> </ol> Start VM Negative (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/negative-start-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/negative-start-vm/ - In a multi-node setup disconnect/shutdown the node where the VM is running Start the VM Expected Results You should not be able to start the VM + <ol> <li>In a multi-node setup disconnect/shutdown the node where the VM is running</li> <li>Start the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>You should not be able to start the VM</li> </ol> Stop VM Negative (e2e_fe) https://harvester.github.io/tests/manual/virtual-machines/negative-stop-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/negative-stop-vm/ - In a multi-node setup disconnect/shutdown the node 
where the VM is running Stop the VM Expected Results The VM list should quickly update to not running, or some other error state + <ol> <li>In a multi-node setup disconnect/shutdown the node where the VM is running</li> <li>Stop the VM</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>The VM list should quickly update to not running, or some other error state</li> </ol> Update image labels after deleting source VM https://harvester.github.io/tests/manual/virtual-machines/1602-update-labels-on-image-after-vm-delete/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/1602-update-labels-on-image-after-vm-delete/ - Related issues: #1602 exported image can&rsquo;t be deleted after vm removed Verification Steps create vm &ldquo;vm-1&rdquo; create a image &ldquo;img-1&rdquo; by export the volume used by vm &ldquo;vm-1&rdquo; delete vm &ldquo;vm-1&rdquo; update image &ldquo;img-1&rdquo; labels Expected Results image &ldquo;img-1&rdquo; will be updated + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1602">#1602</a> exported image can&rsquo;t be deleted after vm removed</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>create vm &ldquo;vm-1&rdquo;</li> <li>create a image &ldquo;img-1&rdquo; by export the volume used by vm &ldquo;vm-1&rdquo;</li> <li>delete vm &ldquo;vm-1&rdquo;</li> <li>update image &ldquo;img-1&rdquo; labels</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>image &ldquo;img-1&rdquo; will be updated</li> </ol> Validate QEMU agent installation https://harvester.github.io/tests/manual/virtual-machines/1235-check-qemu-installation/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/1235-check-qemu-installation/ - Related issues: #1235 QEMU agent is not installed by default when creating VMs Verification Steps Create openSUSE VM Start VM check for qemu-ga package Create Ubuntu VM Start VM Check for qemu-ga package Expected Results VMs should start Packages should be present + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1235">#1235</a> QEMU agent is not installed by default when creating VMs</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create openSUSE VM</li> <li>Start VM</li> <li>check for qemu-ga package</li> <li>Create Ubuntu VM</li> <li>Start VM</li> <li>Check for qemu-ga package</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VMs should start</li> <li>Packages should be present</li> </ol> Verify operations like Stop, restart, pause, download YAML, generate template (e2e_be) https://harvester.github.io/tests/manual/virtual-machines/verify-operations-like-stop-restart-pause-download-yaml-generate-template/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/verify-operations-like-stop-restart-pause-download-yaml-generate-template/ - Take an existing VM and Press the appropriate buttons for the associated operations Stop Restart Pause Download YAML Generate Template Expected Results All operations should complete successfully + <ol> <li>Take an existing VM and Press the appropriate buttons for the associated operations <ul> <li>Stop</li> <li>Restart</li> <li>Pause</li> <li>Download YAML</li> <li>Generate Template</li> </ul> </li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>All operations should complete successfully</li> </ol> Verify that vm-force-reset-policy works 
https://harvester.github.io/tests/manual/virtual-machines/1660-volume-unit-vm-details/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/1660-volume-unit-vm-details/ - Related issues: #1660 The volume unit on the vm details page is incorrect Verification Steps Create new .1G volume Create new VM Create with raw-image template Add opensuse base image Add .1G Volume Verify size in VM details on volume tab Expected Results Size should show as .1G + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1660">#1660</a> The volume unit on the vm details page is incorrect</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create new .1G volume</li> <li>Create new VM</li> <li>Create with raw-image template</li> <li>Add opensuse base image</li> <li>Add .1G Volume</li> <li>Verify size in VM details on volume tab <img src="https://user-images.githubusercontent.com/83787952/145658516-73f5c72c-2543-46cd-9f90-8bc47f5ce2d4.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Size should show as .1G</li> </ol> View log function on virtual machine https://harvester.github.io/tests/manual/virtual-machines/5266-view-log-function/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/5266-view-log-function/ - Related issues: #5266 [BUG] Click View Logs option on virtual machine dashboard can&rsquo;t display any log entry Category: Virtual Machines Verification Steps Create one virtual machines named vm1 in the Harvester virtual machine page Wait until the vm1 in running state Click the View Logs in the side option menu Check the log panel content of vm Click the Clear button Click the Download button Enter some query sting in the Filter field Click settings, change the Show the latest to different options Uncheck/Check the Wrap Lines Uncheck/Check the Show Timestamps Expected Results Should display the detailed log entries on the vm log panel including timestamp and content All existing logs would be cleaned up Ensure new logs will display on the panel Check can correctly download the log to the . 
+ <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/5266">#5266</a> [BUG] Click View Logs option on virtual machine dashboard can&rsquo;t display any log entry</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Virtual Machines</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create one virtual machines named <code>vm1</code> in the Harvester virtual machine page</li> <li>Wait until the <code>vm1</code> in running state</li> <li>Click the View Logs in the side option menu</li> <li>Check the log panel content of vm</li> <li>Click the <code>Clear</code> button</li> <li>Click the <code>Download</code> button</li> <li>Enter some query sting in the <code>Filter</code> field</li> <li>Click settings, change the <code>Show the latest</code> to different options</li> <li>Uncheck/Check the <code>Wrap Lines</code></li> <li>Uncheck/Check the <code>Show Timestamps</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ul> <li>Should display the detailed log entries on the vm log panel including timestamp and content <img src="https://harvester.github.io/tests/images/virtual-machines/5266-view-vm-log.png" alt="images/virtual-machines/5266-view-vm-log.png"></li> <li>All existing logs would be cleaned up</li> <li>Ensure new logs will display on the panel</li> <li>Check can correctly download the log to the <code>.log</code> file and contain all the details</li> <li>Check the log entries contains the filter string can display correctly</li> <li>Check each different options of <code>Show the latest</code> log option can display log according to the settings</li> <li>Check the log entries can be wrapped or unwrapped</li> <li>Check the log entries can display with/without timestamp</li> </ul> VM on error state https://harvester.github.io/tests/manual/virtual-machines/vm_on_error_state/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/vm_on_error_state/ - Ref: https://github.com/harvester/harvester/issues/1446 https://github.com/harvester/harvester/issues/982 Verify Items Error message should displayed when VM can&rsquo;t be scheduled VM&rsquo;s state should be changed when host is down Case: Create a VM that no Node can host it Install Harvester with any nodes download a image to create VM create a VM with over-commit (consider to over-provisioning feature, double or triple the host resource would be more reliable.) VM should shows Starting state, and an alart icon shows aside. 
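<p>For the over-commit scheduling case above, the same scheduling failure can be cross-checked from the CLI. A minimal sketch, assuming kubectl access to the Harvester cluster and a pending VM in the <code>default</code> namespace (the pod name below is a placeholder):</p> <pre tabindex="0"><code># Find the pending virt-launcher pod that backs the unschedulable VM
$ kubectl get vmi -n default
$ kubectl get pods -n default | grep virt-launcher
# The scheduler reason (&quot;0/N nodes are available: ...&quot;) appears in the pod events
$ kubectl describe pod &lt;virt-launcher-pod&gt; -n default
</code></pre>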
+ <p>Ref:</p> <ul> <li><a href="https://github.com/harvester/harvester/issues/1446">https://github.com/harvester/harvester/issues/1446</a></li> <li><a href="https://github.com/harvester/harvester/issues/982">https://github.com/harvester/harvester/issues/982</a></li> </ul> <h2 id="verify-items">Verify Items</h2> <ul> <li>Error message should be displayed when VM can&rsquo;t be scheduled</li> <li>VM&rsquo;s <strong>state</strong> should be changed when host is down</li> </ul> <h2 id="case-create-a-vm-that-no-node-can-host-it">Case: Create a VM that no Node can host it</h2> <ol> <li>Install Harvester with any nodes</li> <li>Download an image to create a VM</li> <li>Create a VM with over-committed resources (considering the over-provisioning feature, doubling or tripling the host resources is more reliable)</li> <li>The VM should show the <strong>Starting</strong> state, and an alert icon should appear beside it.</li> <li>Hover over the icon; the pop-up message should display a message like <code>0/N nodes are available: n insufficient ...</code></li> </ol> <h2 id="case-vms-state-changed-to-not-ready-when-the-host-is-down">Case: VM&rsquo;s state changed to <strong>Not Ready</strong> when the host is down</h2> <ol> <li>Install Harvester with 2+ nodes</li> <li>Create an Image for VM creation</li> <li>Create a VM and wait until state becomes <strong>Running</strong></li> <li>Reboot the node which is hosting the VM</li> <li>Node&rsquo;s <em>State</em> should be <code>In Progress</code> in the <em><strong>Hosts</strong></em> page</li> <li>VM&rsquo;s <em>State</em> should be <code>Not Ready</code> in the <em><strong>Virtual Machines</strong></em> page</li> </ol> VM scheduling on Specific node (e2e_fe) https://harvester.github.io/tests/manual/virtual-machines/vm_schedule_on_node/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/vm_schedule_on_node/ - Ref: https://github.com/harvester/harvester/issues/1350 Verify Items Node which is not active should not be listed in Node Scheduling list Case: Schedule VM on the Node which is Enable Maintenance Mode Install Harvester with at least 2 nodes Login and Navigate to Virtual Machines Create VM and Select Run VM on specific node(s)... 
All Active nodes should in the list Navigate to Host and pick node(s) to Enable Maintenance Mode Make sure Node(s) state changed into Maintenance Mode Repeat step 2 and 3 Picked Node(s) should not in the list Revert picked Node(s) to back to state of Active Repeat step 2 to 4 + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1350">https://github.com/harvester/harvester/issues/1350</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>Node which is not active should not be listed in <strong>Node Scheduling</strong> list</li> </ul> <h2 id="case-schedule-vm-on-the-node-which-is-enable-maintenance-mode">Case: Schedule VM on the Node which is <strong>Enable Maintenance Mode</strong></h2> <ol> <li>Install Harvester with at least 2 nodes</li> <li>Login and Navigate to <em>Virtual Machines</em></li> <li>Create VM and Select <code>Run VM on specific node(s)...</code></li> <li>All <em><strong>Active</strong></em> nodes should in the list</li> <li>Navigate to <em>Host</em> and pick node(s) to <strong>Enable Maintenance Mode</strong></li> <li>Make sure Node(s) state changed into <em><strong>Maintenance Mode</strong></em></li> <li>Repeat step 2 and 3</li> <li>Picked Node(s) should not in the list</li> <li>Revert picked Node(s) to back to state of <em><strong>Active</strong></em></li> <li>Repeat step 2 to 4</li> </ol> VM's CPU maximum limitation https://harvester.github.io/tests/manual/virtual-machines/vm_cpu_limits/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/virtual-machines/vm_cpu_limits/ - Ref: https://github.com/harvester/harvester/issues/1565 Verify Items VM&rsquo;s maximum CPU amount should not have limitation. Case: Create VM with large CPU amount Install harvester with any nodes Create image for VM creation Create a VM with vCPU over than 100 Start VM and verify lscpu shows the same amount + <p>Ref: <a href="https://github.com/harvester/harvester/issues/1565">https://github.com/harvester/harvester/issues/1565</a></p> <h2 id="verify-items">Verify Items</h2> <ul> <li>VM&rsquo;s maximum CPU amount should not have limitation.</li> </ul> <h2 id="case-create-vm-with-large-cpu-amount">Case: Create VM with large CPU amount</h2> <ol> <li>Install harvester with any nodes</li> <li>Create image for VM creation</li> <li>Create a VM with vCPU over than 100</li> <li>Start VM and verify <code>lscpu</code> shows the same amount</li> </ol> diff --git a/manual/volumes/index.xml b/manual/volumes/index.xml index a21bd7c08..9d5d44fe5 100644 --- a/manual/volumes/index.xml +++ b/manual/volumes/index.xml @@ -12,133 +12,133 @@ https://harvester.github.io/tests/manual/volumes/1401-support-volume-hot-unplug/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/1401-support-volume-hot-unplug/ - Related issues: #1401 Http proxy setting download image Environment setup Setup an airgapped harvester Create an 3 nodes harvester cluster with large size disks Scenario1: Live migrate VM already have hot-plugged volume to new node, then detach (hot-unplug) it Verification Steps Create a virtual machine Create several volumes (without image) Add volume, hot-plug volume to virtual machine Open virtual machine, find hot-plugged volume Click de-attach volume Add volume again Expected Results Can hot-plug volume without error Can hot-unplug the pluggable volumes without restarting VM The de-attached volume can also be hot-plug and mount back to VM + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1401">#1401</a> Http proxy setting 
download image</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup an airgapped harvester</p> <ol> <li>Create an 3 nodes harvester cluster with large size disks</li> </ol> <h5 id="scenario1-live-migrate-vm-already-have-hot-plugged-volume-to-new-node-then-detach-hot-unplug-it">Scenario1: Live migrate VM already have hot-plugged volume to new node, then detach (hot-unplug) it</h5> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a virtual machine</li> <li>Create several volumes (without image)</li> <li>Add volume, hot-plug volume to virtual machine</li> <li>Open virtual machine, find hot-plugged volume</li> <li>Click de-attach volume</li> <li>Add volume again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can hot-plug volume without error</li> <li>Can hot-unplug the pluggable volumes without restarting VM</li> <li>The de-attached volume can also be hot-plug and mount back to VM</li> </ol> Add/remove disk to Host config https://harvester.github.io/tests/manual/volumes/1623-add-disk-to-host/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/1623-add-disk-to-host/ - Related issues: #1623 Unable to add additional disks to host config Environment setup Add Disk that isn&rsquo;t assigned to host Verification Steps Head to &ldquo;Hosts&rdquo; page Click &ldquo;Edit Config&rdquo; on a node and switch to &ldquo;Disks&rdquo; tab Validate: Open dropdown and see no disks Attach a disk on that node Validate: Open dropdown and see some disks Verify that host shows new disk as available storage and Longhorn is showing new schedulable space Detach a disk on that node Validate: Open dropdown and see no disks Verify that host shows new disk as available storage and Longhorn is showing new schedulable space Expected Results Disk space should show appropriately + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1623">#1623</a> Unable to add additional disks to host config</li> </ul> <h2 id="environment-setup">Environment setup</h2> <ol> <li>Add Disk that isn&rsquo;t assigned to host</li> </ol> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Head to &ldquo;Hosts&rdquo; page</li> <li>Click &ldquo;Edit Config&rdquo; on a node and switch to &ldquo;Disks&rdquo; tab</li> <li>Validate: Open dropdown and see no disks</li> <li>Attach a disk on that node</li> <li>Validate: Open dropdown and see some disks</li> <li>Verify that host shows new disk as available storage and Longhorn is showing new schedulable space</li> <li>Detach a disk on that node</li> <li>Validate: Open dropdown and see no disks</li> <li>Verify that host shows new disk as available storage and Longhorn is showing new schedulable space</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Disk space should show appropriately <img src="https://user-images.githubusercontent.com/83787952/146289651-3c8b8da7-5ba1-4a15-aa4f-32f24af4b8dc.png" alt="image"></li> </ol> Check Longhorn volume mount point https://harvester.github.io/tests/manual/volumes/1667-check-longhorn-volume-mount/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/1667-check-longhorn-volume-mount/ - Related issues: #1667 data partition is not mounted to the LH path properly Verification Steps Install Harvester node in VM from ISO Check partitions with lsblk -f Verify mount point of /var/lib/longhorn Expected Results Mount point should show /var/lib/longhorn + <ul> <li>Related issues: <a 
href="https://github.com/harvester/harvester/issues/1667">#1667</a> data partition is not mounted to the LH path properly</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Install Harvester node in VM from ISO</li> <li>Check partitions with <code>lsblk -f</code></li> <li>Verify mount point of <code>/var/lib/longhorn</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Mount point should show <code>/var/lib/longhorn</code> <img src="https://user-images.githubusercontent.com/83787952/146290004-0584f817-d9df-4f4d-9069-d3ed4199b30f.png" alt="image"></li> </ol> Create image from Volume(e2e_fe) https://harvester.github.io/tests/manual/volumes/create-image-from-volume/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/create-image-from-volume/ - Create new VM Add SSH key Run through iterations for 1, 2, and 3 for attached bash script Export volume to image from volumes page Create new VM from image Run md5sum -c file2.md5 file1-2.md5 file2-2.md5 file3.md5 Expected Results image should upload/complete in images page New VM should create SSH key should work on new VM file2.md5 should fail and the other three md5 checks should pass Comments #!/bin/bash # first file if [ $1 = 1 ] then dd if=/dev/urandom of=file1. + <ol> <li>Create new VM</li> <li>Add SSH key</li> <li>Run through iterations for 1, 2, and 3 for attached bash script</li> <li>Export volume to image from volumes page</li> <li>Create new VM from image</li> <li>Run <code>md5sum -c file2.md5 file1-2.md5 file2-2.md5 file3.md5</code></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>image should upload/complete in images page</li> <li>New VM should create</li> <li>SSH key should work on new VM</li> <li>file2.md5 should fail and the other three md5 checks should pass</li> </ol> <h3 id="comments">Comments</h3> <pre tabindex="0"><code>#!/bin/bash # first file if [ $1 = 1 ] then dd if=/dev/urandom of=file1.txt count=100 bs=1M md5sum file1.txt &gt; file1.md5 md5sum -c file1.md5 fi ## overwrite file1 and create file2 if [ $1 = 2 ] then dd if=/dev/urandom of=file1.txt count=100 bs=1M dd if=/dev/urandom of=file2.txt count=100 bs=1M md5sum file1.txt &gt; file1-2.md5 md5sum file2.txt &gt; file2.md5 md5sum -c file1.md5 file1-2.md5 file2.md5 fi ## overwrite file2 and create file3 if [ $1 = 3 ] then dd if=/dev/urandom of=file2.txt count=100 bs=1M dd if=/dev/urandom of=file3.txt count=100 bs=1M md5sum file2.txt &gt; file2-2.md5 md5sum file3.txt &gt; file3.md5 md5sum -c file2.md5 file1-2.md5 file2-2.md5 file3.md5 fi </code></pr Create Volume root disk blank Form with label https://harvester.github.io/tests/manual/volumes/create-volume-root-disk-blank-form-label/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/create-volume-root-disk-blank-form-label/ - Navigate to volumes page Click Create Don&rsquo;t select an image Input a size Click Create Expected Results Page should load Volume should create successfully and go to succeeded in the list The label can be seen when you edit the volume config + <ol> <li>Navigate to volumes page</li> <li>Click Create</li> <li>Don&rsquo;t select an image</li> <li>Input a size</li> <li>Click Create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Page should load</li> <li>Volume should create successfully and go to succeeded in the list</li> <li>The label can be seen when you edit the volume config</li> </ol> Create volume root disk VM Image Form with label (e2e_be) 
https://harvester.github.io/tests/manual/volumes/create-volume-root-disk-vm-image-form-label/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/create-volume-root-disk-vm-image-form-label/ - Navigate to volumes page Click Create Select an image Input a size Click Create Expected Results Page should load Volume should create successfully and go to succeeded in the list The label can be seen when you edit the volume config + <ol> <li>Navigate to volumes page</li> <li>Click Create</li> <li>Select an image</li> <li>Input a size</li> <li>Click Create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Page should load</li> <li>Volume should create successfully and go to succeeded in the list</li> <li>The label can be seen when you edit the volume config</li> </ol> Create volume root disk VM Image Form(e2e_fe) https://harvester.github.io/tests/manual/volumes/create-volume-root-disk-vm-image-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/create-volume-root-disk-vm-image-form/ - Navigate to volumes page Click Create Select an image Input a size Click Create Expected Results VM should create VM should pass health checks + <ol> <li>Navigate to volumes page</li> <li>Click Create</li> <li>Select an image</li> <li>Input a size</li> <li>Click Create</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create</li> <li>VM should pass health checks</li> </ol> Delete volume that is not attached to a VM (e2e_be_fe) https://harvester.github.io/tests/manual/volumes/delete-volume-that-is-not-attached-to-vm/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/delete-volume-that-is-not-attached-to-vm/ - Create volume Validate that it created Check the volume crd. Delete the volume Verify that volume is removed from list Check the volume object doesn&rsquo;t exist anymore. Expected Results Volume should create It should show in volume list Volume crd should have correct info. Volume should delete. Volume should be removed from list + <ol> <li>Create volume</li> <li>Validate that it created</li> <li>Check the volume crd.</li> <li>Delete the volume</li> <li>Verify that volume is removed from list</li> <li>Check the volume object doesn&rsquo;t exist anymore.</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Volume should create</li> <li>It should show in volume list</li> <li>Volume crd should have correct info.</li> <li>Volume should delete.</li> <li>Volume should be removed from list</li> </ol> Delete volume that was attached to VM but now is not (e2e_be_fe) https://harvester.github.io/tests/manual/volumes/delete-volume-that-was-attached-to-vm-but-is-not-now/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/delete-volume-that-was-attached-to-vm-but-is-not-now/ - Create a VM with a root volume Write 10Gi data into it. Delete the VM but not the volume Verify Volume still exists Check disk space on node Delete the volume Verify that volume is removed from list Check disk space on node Expected Results VM should create 10Gi space should be consumed on the disk. 
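<p>For the &ldquo;write 10Gi data&rdquo; step above, a minimal sketch of one way to generate the data inside the guest and compare disk usage on the node before and after deletion (the file name and the replica directory are assumptions based on the default Longhorn layout):</p> <pre tabindex="0"><code># Inside the guest: write roughly 10Gi of random data and flush it to disk
$ dd if=/dev/urandom of=fill.bin bs=1M count=10240 status=progress
$ sync
# On the Harvester node: check space consumed under the Longhorn data path
$ df -h /var/lib/longhorn
$ du -sh /var/lib/longhorn/replicas
</code></pre>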
VM should delete Volume should still show in Volume list Disk space should show 10Gi + Volume should delete Volume should be removed from list Space should be less than before + <ol> <li>Create a VM with a root volume</li> <li>Write 10Gi data into it.</li> <li>Delete the VM but not the volume</li> <li>Verify Volume still exists</li> <li>Check disk space on node</li> <li>Delete the volume</li> <li>Verify that volume is removed from list</li> <li>Check disk space on node</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should create</li> <li>10Gi space should be consumed on the disk.</li> <li>VM should delete</li> <li>Volume should still show in Volume list</li> <li>Disk space should show 10Gi +</li> <li>Volume should delete</li> <li>Volume should be removed from list</li> <li>Space should be less than before</li> </ol> Detach volume from virtual machine (e2e_fe) https://harvester.github.io/tests/manual/volumes/detach-volume-from-virtual-machine/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/detach-volume-from-virtual-machine/ - Related issues: #1708 After click &ldquo;Detach volume&rdquo; button, nothing happend Category: Volume Verification Steps Create several new volume in volumes page Create a virtual machine Click the config button on the selected virtual machine Click Add volume and add at least two new volume Click the Detach volume button on the attached volume Repeat above steps several times Expected Results Currently when click the Detach volume button, attached volume can be detach successfully. + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1708">#1708</a> After click &ldquo;Detach volume&rdquo; button, nothing happend</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Volume</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create several new volume in volumes page <img src="https://user-images.githubusercontent.com/29251855/146900871-50fad5fa-2d25-4559-b10b-55e276d7edb8.png" alt="image"></li> <li>Create a virtual machine</li> <li>Click the config button on the selected virtual machine</li> <li>Click Add volume and add at least two new volume <img src="https://user-images.githubusercontent.com/29251855/146901117-dac73494-d8fd-4e1c-9a74-eed76fc14511.png" alt="image"></li> <li>Click the <code>Detach volume</code> button on the attached volume <img src="https://user-images.githubusercontent.com/29251855/146901585-51df212b-5443-4961-b648-6db265c272c2.png" alt="image"></li> </ol> <p><img src="https://user-images.githubusercontent.com/29251855/146901235-6607a936-884b-41d9-94e2-372e8c028334.png" alt="image"></p> Edit Volume Form add label https://harvester.github.io/tests/manual/volumes/edit-volume-form-add-label/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/edit-volume-form-add-label/ - Navigate to volumes page Edit Volume with Form Click Labels Add label Click Save Open VM again and click the config tab Verify that label was saved Expected Results Volume should save Label should add Label should show when re-opened + <ol> <li>Navigate to volumes page</li> <li>Edit Volume with Form</li> <li>Click Labels</li> <li>Add label</li> <li>Click Save</li> <li>Open VM again and click the config tab</li> <li>Verify that label was saved</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Volume should save</li> <li>Label should add</li> <li>Label should show when re-opened</li> </ol> Edit volume increase size via form (e2e_fe) 
https://harvester.github.io/tests/manual/volumes/edit-volume-increase-size-form/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/edit-volume-increase-size-form/ - Stop the vm Navigate to volumes page Edit Volume via form Increase size Click Save Connect to VM via console Check size of root disk Expected Results VM should stop VM should reboot after saving Disk should be resized + <ol> <li>Stop the vm</li> <li>Navigate to volumes page</li> <li>Edit Volume via form</li> <li>Increase size</li> <li>Click Save</li> <li>Connect to VM via console</li> <li>Check size of root disk</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should stop</li> <li>VM should reboot after saving</li> <li>Disk should be resized</li> </ol> Edit volume increase size via YAML (e2e_be) https://harvester.github.io/tests/manual/volumes/edit-volume-increase-size-yaml/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/edit-volume-increase-size-yaml/ - Stop the vm Navigate to volumes page Edit Volume as YAML Increase size Click Save Connect to VM via console Check size of root disk Expected Results VM should stop VM should reboot after saving Disk should be resized + <ol> <li>Stop the vm</li> <li>Navigate to volumes page</li> <li>Edit Volume as YAML</li> <li>Increase size</li> <li>Click Save</li> <li>Connect to VM via console</li> <li>Check size of root disk</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>VM should stop</li> <li>VM should reboot after saving</li> <li>Disk should be resized</li> </ol> Edit Volume YAML add label (e2e_be) https://harvester.github.io/tests/manual/volumes/edit-volume-yaml-add-label/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/edit-volume-yaml-add-label/ - Navigate to volumes page Edit Volume as YAML Add label to config Click Save Open VM again and click the config tab Verify that label was saved Expected Results Volume should save Label should add Label should show when re-opened + <ol> <li>Navigate to volumes page</li> <li>Edit Volume as YAML</li> <li>Add label to config</li> <li>Click Save</li> <li>Open VM again and click the config tab</li> <li>Verify that label was saved</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Volume should save</li> <li>Label should add</li> <li>Label should show when re-opened</li> </ol> Negative delete Volume that is in use (e2e_be) https://harvester.github.io/tests/manual/volumes/negative-delete-volume-that-is-in-use/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/negative-delete-volume-that-is-in-use/ - Navigate to Volumes page and check for a volume in use by a VM Try to delete volume Click delete on modal Expected Results Page should load You should get an error message on the delete modal + <ol> <li>Navigate to Volumes page and check for a volume in use by a VM</li> <li>Try to delete volume</li> <li>Click delete on modal</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Page should load</li> <li>You should get an error message on the delete modal</li> </ol> Support Volume Hot Unplug (e2e_fe) https://harvester.github.io/tests/manual/volumes/support-volume-hot-unplug/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/support-volume-hot-unplug/ - Related issues: #1401 Support volume hot-unplug Category: Storage Environment setup Setup an airgapped harvester Create an 3 nodes harvester cluster with large size disks 
Scenario1: Live migrate VM already have hot-plugged volume to new node, then detach (hot-unplug) it Verification Steps Create a virtual machine Create several volumes (without image) Add volume, hot-plug volume to virtual machine Open virtual machine, find hot-plugged volume Click de-attach volume Add volume again Expected Results Can hot-plug volume without error Can hot-unplug the pluggable volumes without restarting VM The de-attached volume can also be hot-plug and mount back to VM + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1401">#1401</a> Support volume hot-unplug</li> </ul> <h2 id="category">Category:</h2> <ul> <li>Storage</li> </ul> <h2 id="environment-setup">Environment setup</h2> <p>Setup an airgapped harvester</p> <ol> <li>Create an 3 nodes harvester cluster with large size disks</li> </ol> <h5 id="scenario1-live-migrate-vm-already-have-hot-plugged-volume-to-new-node-then-detach-hot-unplug-it">Scenario1: Live migrate VM already have hot-plugged volume to new node, then detach (hot-unplug) it</h5> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create a virtual machine</li> <li>Create several volumes (without image)</li> <li>Add volume, hot-plug volume to virtual machine</li> <li>Open virtual machine, find hot-plugged volume</li> <li>Click de-attach volume</li> <li>Add volume again</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Can hot-plug volume without error</li> <li>Can hot-unplug the pluggable volumes without restarting VM</li> <li>The de-attached volume can also be hot-plug and mount back to VM</li> </ol> Validate volume shows as in use when attached (e2e_be) https://harvester.github.io/tests/manual/volumes/validate-volume-shows-in-use-while-attached/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/validate-volume-shows-in-use-while-attached/ - Navigate to Volumes and check for a volume in use by a VM Verify that the state says In Use Expected Results State should show correctly + <ol> <li>Navigate to Volumes and check for a volume in use by a VM</li> <li>Verify that the state says In Use</li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>State should show correctly</li> </ol> Verify that vm-force-reset-policy works https://harvester.github.io/tests/manual/volumes/1660-volume-unit-vm-details/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/volumes/1660-volume-unit-vm-details/ - Related issues: #1660 The volume unit on the vm details page is incorrect Verification Steps Create new .1G volume Create new VM Create with raw-image template Add opensuse base image Add .1G Volume Verify size in VM details on volume tab Expected Results Size should show as .1G + <ul> <li>Related issues: <a href="https://github.com/harvester/harvester/issues/1660">#1660</a> The volume unit on the vm details page is incorrect</li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li>Create new .1G volume</li> <li>Create new VM</li> <li>Create with raw-image template</li> <li>Add opensuse base image</li> <li>Add .1G Volume</li> <li>Verify size in VM details on volume tab <img src="https://user-images.githubusercontent.com/83787952/145658516-73f5c72c-2543-46cd-9f90-8bc47f5ce2d4.png" alt="image"></li> </ol> <h2 id="expected-results">Expected Results</h2> <ol> <li>Size should show as .1G</li> </ol> Verify that VMs stay up when disks are evicted https://harvester.github.io/tests/manual/volumes/1334-evict-disks-check-vms/ Mon, 01 Jan 0001 00:00:00 
+0000 https://harvester.github.io/tests/manual/volumes/1334-evict-disks-check-vms/ - Related issues #1334 Volumes fail with Scheduling Failure after evicting disc on multi-disc node #5307 Replicas should be evicted and rescheduled to other disks before removing extra disk Verification Steps Created 3 nodes Harvester. Added formatted disk (called disk A) to node0 VM in the harvester node page. Added disk tag test on following disk in the longhorn page. disk A of node0 root disk of node1 root disk of node2 Created storage class with disk tag test and replica 3. + <ul> <li>Related issues <ul> <li><a href="https://github.com/harvester/harvester/issues/1334">#1334</a> Volumes fail with Scheduling Failure after evicting disc on multi-disc node</li> <li><a href="https://github.com/harvester/harvester/issues/5307">#5307</a> Replicas should be evicted and rescheduled to other disks before removing extra disk</li> </ul> </li> </ul> <h2 id="verification-steps">Verification Steps</h2> <ol> <li> <p>Created 3 nodes Harvester.</p> </li> <li> <p>Added formatted disk (called disk A) to node0 VM in the harvester node page.</p> </li> <li> <p>Added disk tag <code>test</code> on following disk in the longhorn page.</p> diff --git a/manual/webhooks/index.xml b/manual/webhooks/index.xml index caa4d6554..e0328c667 100644 --- a/manual/webhooks/index.xml +++ b/manual/webhooks/index.xml @@ -12,42 +12,42 @@ https://harvester.github.io/tests/manual/webhooks/datavolumes.cdi.kubevirt.io/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/webhooks/datavolumes.cdi.kubevirt.io/ - GUI Create a VM in GUI and wait until it&rsquo;s running. Assume its name is test. kube-api Try to delete its datavolume: $ kubectl get vms NAME AGE STATUS READY test 5m16s Running True There should be an datavolume bound to that VM $ kubectl get dvs NAME PHASE PROGRESS RESTARTS AGE test-disk-0-klrft Succeeded 100.0% 5m18s The user should not be able to delete the datavolume $ kubectl delete dv test-disk-0-klrft The request is invalid: : can not delete the volume test-disk-0-klrft which is currently attached to VMs: default/test `` ## Expected Results ### kube-api The deletion of its datavolume should fail. + <h2 id="gui">GUI</h2> <ol> <li>Create a VM in GUI and wait until it&rsquo;s running. Assume its name is test.</li> </ol> <h3 id="kube-api">kube-api</h3> <ol> <li>Try to delete its datavolume:</li> </ol> <pre tabindex="0"><code>$ kubectl get vms NAME AGE STATUS READY test 5m16s Running True </code></pre><ol> <li>There should be an datavolume bound to that VM</li> </ol> <pre tabindex="0"><code>$ kubectl get dvs NAME PHASE PROGRESS RESTARTS AGE test-disk-0-klrft Succeeded 100.0% 5m18s </code></pr keypairs.harvesterhci.io https://harvester.github.io/tests/manual/webhooks/keypairs.harvesterhci.io/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/webhooks/keypairs.harvesterhci.io/ - GUI Enable VLAN network in settings Create a network with VLAN 5 and assume its name is my-network. C1. reate another network with VLAN 5: it should fails with: admission webhook &ldquo;validator.harvesterhci.io&rdquo; denied the request: VLAN ID 5 is already allocated Create a VM on VLAN 5, delete network my-network and it should fail with: admission webhook &ldquo;validator.harvesterhci.io&rdquo; denied the request: network my-network is still used by vm(s): vm-test in a modal. + <h3 id="gui">GUI</h3> <ol> <li>Enable VLAN network in settings</li> <li>Create a network with VLAN 5 and assume its name is my-network.</li>
<li>Create another network with VLAN 5: it should fail with: admission webhook &ldquo;<a href="http://validator.harvesterhci.io/">validator.harvesterhci.io</a>&rdquo; denied the request: VLAN ID 5 is already allocated</li> <li>Create a VM on VLAN 5, delete network my-network and it should fail with: admission webhook &ldquo;<a href="http://validator.harvesterhci.io/">validator.harvesterhci.io</a>&rdquo; denied the request: network my-network is still used by vm(s): vm-test in a modal.</li> </ol> <h2 id="expected-results">Expected Results</h2> <h3 id="gui-1">GUI</h3> <ol> <li>The operations should fail with the errors shown above.</li> </ol> network-attachment-definitions.k8s.cni.cncf.io https://harvester.github.io/tests/manual/webhooks/q-network-attachment-definitions.k8s.cni.cncf.io/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/webhooks/q-network-attachment-definitions.k8s.cni.cncf.io/ - GUI Enable VLAN network in settings Create a network with VLAN 5 and assume its name is my-network. Create another network with VLAN 5: it should fails with: admission webhook &ldquo;validator.harvesterhci.io&rdquo; denied the request: VLAN ID 5 is already allocated Create a VM on VLAN 5, delete network my-network and it should fail with: admission webhook &ldquo;validator.harvesterhci.io&rdquo; denied the request: network my-network is still used by vm(s): vm-test in a modal Expected Results GUI Unsure of desired behavior. + <h3 id="gui">GUI</h3> <ol> <li>Enable VLAN network in settings</li> <li>Create a network with VLAN 5 and assume its name is my-network.</li> <li>Create another network with VLAN 5: it should fail with: admission webhook &ldquo;<a href="http://validator.harvesterhci.io/">validator.harvesterhci.io</a>&rdquo; denied the request: VLAN ID 5 is already allocated</li> <li>Create a VM on VLAN 5, delete network my-network and it should fail with: admission webhook &ldquo;<a href="http://validator.harvesterhci.io/">validator.harvesterhci.io</a>&rdquo; denied the request: network my-network is still used by vm(s): vm-test in a modal</li> </ol> <h2 id="expected-results">Expected Results</h2> <h3 id="gui-1">GUI</h3> <p>Unsure of desired behavior. Checking and will update.</p> virtualmachineimages.harvesterhci.io https://harvester.github.io/tests/manual/webhooks/virtualmachineimages.harvesterhci.io/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/webhooks/virtualmachineimages.harvesterhci.io/ - GUI Create an image from GUI Create another image with the same name. 
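<p>The my-network / vm-test deletion check above can also be driven through the API. A minimal sketch, assuming the VLAN network was created in the <code>default</code> namespace (namespace and object names are illustrative):</p> <pre tabindex="0"><code># Deleting a VLAN network that a VM still uses should be rejected by the webhook
$ kubectl get network-attachment-definitions.k8s.cni.cncf.io -n default
$ kubectl delete network-attachment-definitions.k8s.cni.cncf.io my-network -n default
# Expected: admission webhook &quot;validator.harvesterhci.io&quot; denied the request:
#           network my-network is still used by vm(s): vm-test
</code></pre>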
The operation should fail with admission webhook &ldquo;<a href="http://validator.harvesterhci.io/">validator.harvesterhci.io</a>&rdquo; denied the request: A resource with the same name exists</li> </ol> <h3 id="kube-api">kube-api</h3> <ol> <li>Create an image from the manifest:</li> </ol> <pre tabindex="0"><code>$ cat image.yaml --- apiVersion: harvesterhci.io/v1beta1 kind: VirtualMachineImage metadata: generateName: image- namespace: default spec: sourceType: download displayName: cirros-0.5.1-x86_64-disk2.img url: http://192.168.2.106/cirros-0.5.1-x86_64-disk.img $ kubectl create -f image.yml virtualmachineimage.harvesterhci.io/image-8dkbq created </code></pr virtualmachinerestores.harvesterhci.io https://harvester.github.io/tests/manual/webhooks/virtualmachinerestores.harvesterhci.io/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/webhooks/virtualmachinerestores.harvesterhci.io/ - GUI Setup a backup target Create a backup from a VM. Assume the VM name is vm-test Wait until backup is done Restore the backup to a VM, enter vm-test in the Virtual Machine Name field kube-api $ cat restore.yaml 1 --- apiVersion: harvesterhci.io/v1beta1 kind: VirtualMachineRestore metadata: name: restore-aaaa namespace: default spec: newVM: false target: apiGroup: kubevirt.io kind: VirtualMachine name: &#34;&#34; virtualMachineBackupName: test $ kubectl create -f restore.yaml Expected Results GUI The operation should fail with admission webhook &ldquo;validator. + <h3 id="gui">GUI</h3> <ol> <li>Setup a backup target</li> <li>Create a backup from a VM. Assume the VM name is vm-test</li> <li>Wait until backup is done</li> <li>Restore the backup to a VM, enter vm-test in the Virtual Machine Name field</li> </ol> <h3 id="kube-api">kube-api</h3> <pre tabindex="0"><code>$ cat restore.yaml 1 --- apiVersion: harvesterhci.io/v1beta1 kind: VirtualMachineRestore metadata: name: restore-aaaa namespace: default spec: newVM: false target: apiGroup: kubevirt.io kind: VirtualMachine name: &#34;&#34; virtualMachineBackupName: test $ kubectl create -f restore.yaml </code></pr virtualmachinetemplateversions.harvesterhci.io https://harvester.github.io/tests/manual/webhooks/virtualmachinetemplateversions.harvesterhci.io/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/webhooks/virtualmachinetemplateversions.harvesterhci.io/ - kube-api List default templates: $ kubectl get virtualmachinetemplateversions.harvesterhci.io -n harvester-public GUI Go to Advanced -&gt; Templates page Create a new template and set it as the default version Try to delete the default version Expected Results kube-api Default templates should exist: NAME TEMPLATE_ID VERSION AGE iso-image-base-version 1 39m raw-image-base-version 1 39m windows-iso-image-base-version 1 39m GUI Creating a new template should succeed Deleting the default version of a template should fail with: admission webhook &ldquo;validator. 
+ <h3 id="kube-api">kube-api</h3> <ol> <li>List default templates: <code>$ kubectl get virtualmachinetemplateversions.harvesterhci.io -n harvester-public</code></li> </ol> <h3 id="gui">GUI</h3> <ol> <li>Go to Advanced -&gt; Templates page</li> <li>Create a new template and set it as the default version</li> <li>Try to delete the default version</li> </ol> <h2 id="expected-results">Expected Results</h2> <h3 id="kube-api-1">kube-api</h3> <ol> <li>Default templates should exist:</li> </ol> <pre tabindex="0"><code>NAME TEMPLATE_ID VERSION AGE iso-image-base-version 1 39m raw-image-base-version 1 39m windows-iso-image-base-version 1 39m </code></pr
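<p>The default-version protection above can also be probed from the CLI. A minimal sketch that reuses the template versions listed in the expected output; whether the shipped base versions are guarded exactly the same way as a user-created default version is an assumption here:</p> <pre tabindex="0"><code># List the template versions, then try to delete a default one;
# the validator.harvesterhci.io webhook is expected to deny the request
$ kubectl get virtualmachinetemplateversions.harvesterhci.io -n harvester-public
$ kubectl delete virtualmachinetemplateversions.harvesterhci.io iso-image-base-version -n harvester-public
</code></pre>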