This repository has been archived by the owner on Sep 4, 2020. It is now read-only.

Multiple Tests in one Test File #147

Open
midacts opened this issue Apr 19, 2017 · 16 comments

Comments

@midacts
Contributor

midacts commented Apr 19, 2017

Currently, each test file checks one setting.
This proposal is to allow multiple settings to be checked at once.

This would be beneficial for things like advanced settings or other large groups of settings that could be pulled all at once.

Personally, when you run Invoke-Vester, you are running it against hosts, clusters, etc. that should all be configured the same. Accordingly, a separate Config.json file should be specified for environments/hosts/clusters/etc. that are different.

By grouping these large subsets of settings all at once, it should:

  • Spare us the labor of creating each individual test
  • Spare us from having to maintain all of those tests
  • Give us the ability to check a large array of values with very little effort
  • (There are probably more, but I'm not a good salesman/wordsmith)

For my NFS settings test, there are 23 settings, which would result in 23 individual tests.
vCenter has upwards of 840 advanced settings. I don't know if we will add more of those, but I wouldn't want to make 840 individual tests.

Expected Behavior

To entertain the idea of allowing multiple tests per test file

Current Behavior

One test per test file.
VesterTemplate.Tests.ps1 is more or less expecting one test.
(Currently you get one failed test and you are out of the Try/Catch loop. More on this later.)

Possible Solution

Gist link to possible solution

Test File - $Desired

$Desired will contain the value from the Config.json, imported as a 'PSCustomObject'. This is just how the ConvertFrom-Json cmdlet works.

(in the gist we convert $Desired into a 'hashtable' so we can compare it with $Actual)
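
For reference, a minimal sketch of that conversion, assuming $Desired holds the imported PSCustomObject (the gist's exact code may differ):

# Build a hashtable from the PSCustomObject's properties
$DesiredHash = @{}
$Desired.PSObject.Properties | ForEach-Object { $DesiredHash[$_.Name] = $_.Value }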

Test File - $Type and $Actual

Quick side note: from my testing, $Actual should output a hashtable (meaning only 'key/name' and 'value' pairs).

I have $Actual output an [ordered] hashtable.
Note: $Type can be 'hashtable' or 'PSCustomObject' in this scenario; it doesn't matter, but I used 'hashtable' for consistency.

# Gather all NFS advanced settings into an ordered hashtable of name/value pairs
$All = [ordered]@{}
$NFS = Get-AdvancedSetting -Entity $Object | Where-Object {$_.Name -Match "NFS"} | Sort-Object Name | Select-Object Name,Value
$NFS | ForEach-Object { $All.Add($_.Name, $_.Value) }
$All

Which results in Config.json files like this:

                 "nfssettings":  {
                                     "NFS.HeartbeatFrequency":  12,
                                     "NFS.ReceiveBufferSize":  256,
                                     "NFS.VolumeRemountFrequency":  30,
                                     "NFS.LogNfsStat3":  0
                                     }

If we made the $Actual output a 'PSCustomObject' using just this:
Get-AdvancedSetting -Entity $Object | Where-Object {$_.Name -Match "NFS"} | Sort-Object Name | Select-Object Name,Value
The Config.json output looks like this:

                 "nfssettings":  [
                                     "@{Name=NFS.ApdStartCount; Value=3}",
                                     "@{Name=NFS.DiskFileLockUpdateFreq; Value=10}",
                                     "@{Name=NFS.HeartbeatDelta; Value=5}",
                                     "@{Name=NFS.HeartbeatFrequency; Value=12}"
                                     ]

Note: $Type MUST BE 'PSCustomObject' in this scenario, or the results will be null

Test File - $Fix - AKA the Magic

One line to rule them all (in my scenario - and probably most that will use advanced settings):
Get-AdvancedSetting -Entity $Object -Name $DesiredObject.Name | Set-AdvancedSetting -Value $DesiredObject.Value -Confirm:$FALSE

We will get into this more below, but when the test file is dot-sourced, $Fix can be run, and it references $DesiredObject for the current value (we loop through all of the $DesiredObjects individually).

So when a (multiple) test is run, it verifies that each $DesiredObject equals its $ActualObject. If it doesn't and you use -Remediate, it pushes the current $DesiredObject.Value to the setting named by $DesiredObject.Name.

Now onto VesterTemplate.Tests.ps1

In my gist I have a few changes that I have separate issues open for, but regardless:

Everything is basically the same until line 104. I added an if/else that says:
if $Desired is a 'PSCustomObject', it's a test file with multiple tests in it (see the sketch below);
else, do what we have been doing.
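A minimal sketch of that branch, assuming the template's variable names:

# $Desired comes straight from Config.json via ConvertFrom-Json.
# Using the full type name avoids the [pscustomobject] accelerator,
# which actually resolves to PSObject.
If ($Desired -is [System.Management.Automation.PSCustomObject]) {
    # Multiple settings in one test file: loop through each desired
    # name/value pair (see the loop sketched further below)
} Else {
    # Single-value test: keep the existing behavior
}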

Pull in the results from $Actual

We get the results from $Actual and convert them to a 'PSCustomObject' (so we can compare them with $Desired, which is also a 'PSCustomObject').

Loop through all the objects in $Desired

The first thing is that I had to move the It block in here.
Otherwise, once the very first test failed, Pester would not proceed with the rest of the tests;
it considered them all to be one large test, instead of individual ones.
Note: I appended : $($DesiredObject.Name) to $Title in the It clause so we can better see the individual test names.

The first step in the loop is to get the value from $ActualObjects (the 'PSCustomObject' from $Actual) whose name matches the current $DesiredObject (the object we are currently checking against).

Now that we have our $DesiredObject and $ActualObject

If $DesiredObject.Value is null, we still want to make sure the actual value is null;
else we compare the two objects. Since they are both 'PSCustomObjects', it runs Compare-Object with -Property Name,Value specified (sketched below).
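
A simplified sketch of that loop, using Should assertions in place of the template's Try/Catch ($Title and $ActualObjects are assumed from the discussion above):

ForEach ($DesiredObject in $Desired) {
    It "$Title: $($DesiredObject.Name)" {
        # Grab the actual object whose name matches the current desired object
        $ActualObject = $ActualObjects | Where-Object {$_.Name -eq $DesiredObject.Name}

        If ($null -eq $DesiredObject.Value) {
            # Desired value is null, so the actual value should be null too
            $ActualObject.Value | Should BeNullOrEmpty
        } Else {
            # Both are PSCustomObjects; diff them on the Name and Value properties
            Compare-Object $DesiredObject $ActualObject -Property Name,Value |
                Should BeNullOrEmpty
        }
    }
}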

Everything else is the same:
if Compare-Object finds a difference, it goes into the Catch block.

(I think that covers everything; it's kind of late here after writing all this.)
As a side note, I have Write-Host calls in there just for testing/debugging. We can certainly remove those.

Steps to Reproduce (for bugs)

Not really a bug

Context

Accomplish: Simplicity of test writing for large subsets of settings

Your Environment

  • Operating System: Windows 2012 R2
  • PowerShell version: 4.0
  • PowerCLI version: 6.5 Release 1 build 4624819
  • Vester module version used: 1.0.0
  • Pester version: 4.0.2
@brianbunke
Contributor

Ugh. Really sorry I never responded to this. ☹️

@jpsider's doing us a favor and breathing life into this discussion via PR #184. Over there, we have a working proof of concept on what it would look like to test all vCenter advanced settings, one value to each file.

It's safe to say I didn't fully comprehend the inherent scaling issues here.

I'd love to see that branch compared to an example that tests all ~700 values in one test. I suspect that the performance change is going to speak for itself, and that it'll be obvious to work toward implementing this as you've laid out above.

@midacts
Contributor Author

midacts commented Sep 26, 2017

It is possible that the way tests are run would need to be:
if a single test, do X;
if multiple tests, do Y.

Playing around with the config file to get it to work might need to happen as well.

I dabbled with it in the past.
If we are okay with potentially going down that path, then it would be worthwhile. Otherwise it's one test per test file.

At a high level, I'd much rather work with one test file than 800 individual advanced option tests.

@jpsider
Contributor

jpsider commented Sep 26, 2017

I would assume that it could be properly formatted in the JSON, right? Under VC there would be an 'advanced settings' option that would open up to each of the advanced settings, and then each one of those would drive the specific test.
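
Something like this, perhaps (the setting names here are just illustrative examples):

"vcenter": {
    "advancedsettings": {
        "config.log.level": "info",
        "event.maxAge": 30
    }
}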

My only question is how you handle the 'type' if the test file does not change. Right now the type is hard-coded in the test file. (I'm not sure it's properly used, but that's a separate issue!)

@midacts
Contributor Author

midacts commented Sep 26, 2017 via email

@jpsider
Contributor

jpsider commented Sep 26, 2017

I'm not sold on using 'type' either, or the value it provides, or whether it's used correctly. But I agree that's not the overarching theme of this issue. Maintaining a bunch of files sucks; those 600+ I created are a bit crazy. However, there is automation in place to manage the file creation and to capture the current state. The alternative is, as you stated, updating the current behavior in several places if you want to test multiple settings with the same primary command. I don't think there is a right or wrong, just a general direction we need to head in once the line in the sand is drawn.

@midacts
Contributor Author

midacts commented Sep 27, 2017 via email

@midacts
Contributor Author

midacts commented Sep 29, 2017

I was looking through all my Vester files. I thought I didn't have my work anymore (which I don't), but luckily this issue has a gist with the code changes I was working on: link

Basically, there is an if/else in there for whether it is a single-value or multiple-value test.

That is where I'd think the conversation should start, or the first place to begin looking.

@midacts
Contributor Author

midacts commented Sep 29, 2017

Maybe another use case for multiple settings in one test is checking the portgroups of a VMhost

@brianbunke
Contributor

I don't want to hold this up if anyone feels like they need my blessing. Just to be clear: I'd like to see this prototyped.

I'd love to see [#184] compared to an example that tests all ~700 values in one test. I suspect that the performance change is going to speak for itself, and that it'll be obvious to work toward implementing this as you've laid out above.

@midacts
Contributor Author

midacts commented Oct 15, 2017

I haven't gotten around to testing this out yet. I hope to be able to spend some hours on it this week.

@midacts
Contributor Author

midacts commented Oct 16, 2017

gist (pretty much the same).

Outputs from the test:
[screenshot of test output]

I'm Write-Hosting some output just for testing purposes.
1110 advanced settings took 112 seconds.

The slowness is due to the foreach loop.
Maybe there is a better way to run the foreach loop in parallel to speed it up?

Side Note Comparison

Whenever I run New-VesterConfig, the vCenter advanced setting tests take quite some time (26 seconds in this example).

Math:

  • 8 advanced settings == 26 seconds (1110 seconds if it were 1110 advanced settings [strange coincidence], or 18.5 minutes)
  • 1110 advanced settings == 112 seconds

[screenshot]

@midacts
Contributor Author

midacts commented Nov 2, 2017

I'm thinking we might be able to do one Get-AdvancedSetting to get the current values.
The Config.json will have the expected values.

We should be able to get Compare-Object to do a super quick compare.

At that point, it should just be a matter of thinking through the $Fix section.
Maybe a ForEach fixing each incorrectly set value (sketched below).
I don't think you can run Set-AdvancedSetting on multiple settings/values at a time.
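
A rough sketch of that combination (variable names assumed for illustration; $Desired is the hashtable of expected values from Config.json):

# Pull every advanced setting once and index it by name
$Actual = @{}
Get-AdvancedSetting -Entity $Object | ForEach-Object { $Actual[$_.Name] = $_.Value }

# Collect the names whose actual value doesn't match the desired value
$Failed = $Desired.Keys | Where-Object { $Actual[$_] -ne $Desired[$_] }

# Remediate each incorrect value individually
ForEach ($Name in $Failed) {
    Get-AdvancedSetting -Entity $Object -Name $Name |
        Set-AdvancedSetting -Value $Desired[$Name] -Confirm:$false
}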

@midacts
Contributor Author

midacts commented Dec 22, 2017

I think this may work.
Here's a link to the updated gist.

*Disclaimer: it is just a prototype*

I added a Compare-HashTable function I found here.
I placed it in the Private folder so Vester can reference it.

With that, we can compare hashtables super quickly: it compared all of the host's advanced properties (1110 on my host) in 1.86 seconds.

So the changes come down to:

  • Making $Desired a hashtable (line 103)
  • Comparing the hashtables (line 108)
  • If you remediate, the test must loop through each failed value (lines 27-32)
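
For anyone skimming, a very rough sketch of the idea behind such a comparison (not the actual Compare-HashTable function from the gist):

Function Compare-HashTable {
    Param(
        [hashtable]$Reference,    # desired values from Config.json
        [hashtable]$Difference    # actual values from the host
    )
    # Emit one object per key whose value differs between the two tables
    ForEach ($Key in $Reference.Keys) {
        If ($Reference[$Key] -ne $Difference[$Key]) {
            [PSCustomObject]@{
                Key        = $Key
                Reference  = $Reference[$Key]
                Difference = $Difference[$Key]
            }
        }
    }
}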

That's pretty much it.
What are you guys' thoughts?
[screenshot]

@midacts
Contributor Author

midacts commented Jan 11, 2018

Does anyone need any more information to move this along?

@brianbunke
Contributor

Bleh, sorry! I'm in new baby mode, and sleep has engulfed GitHub time lately. 😫

Haven't tested, but the basic overview looks good! I support the conversion to HT for performance.

Thanks again for taking the time to work on this! It's a pretty cool idea.

@midacts
Contributor Author

midacts commented Jan 11, 2018

haha np. A few guys I work with are also in that boat.
I understand.
