
Rerun Tests After Remediation #101

Open
michaeltlombardi opened this issue Mar 17, 2017 · 17 comments

@michaeltlombardi
Contributor

Expected Behavior

When running a Vester test with the -Remediate flag, the command should re-execute the initial condition test after the remediation action; this ensures that the test results are actually as expected post-remediation. This should be true for all published/included tests.

Current Behavior

At least some tests do not rerun the comparison post-remediation.

Possible Solution

Update the template so it reruns the test after remediation - this means test-writers don't have to think about it; they get it for free from the template.
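
As a rough sketch only (variable names mirror the template snippets quoted later in this thread; this is not the actual template code), the change amounts to repeating the comparison after & $Fix runs:

Try {
    # Initial check, using the comparison quoted near the end of this thread
    Compare-Object -ReferenceObject $Desired -DifferenceObject (& $Actual -as $Type) |
        Should BeNullOrEmpty
} Catch {
    If ($Remediate) {
        Write-Warning -Message $_
        Write-Warning -Message "Remediating $Object"
        & $Fix

        # Proposed addition: rerun the same comparison so the test still fails
        # if the fix completed without error but did not produce the desired state
        Compare-Object -ReferenceObject $Desired -DifferenceObject (& $Actual -as $Type) |
            Should BeNullOrEmpty
    } Else {
        throw $_
    }
}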

Context

This would ensure that if a remediation completes without throwing an error, the test will still fail if the conditions specified are not met.

Your Environment

  • Module version used: Vester 1.0.1
  • Operating System: Windows 10.0.14393
  • PowerShell version: 5.1.14393.693
@brianbunke self-assigned this Mar 17, 2017
@midacts
Contributor

midacts commented Mar 31, 2017

I think we would just have to rerun lines 93-94. Do you think that will solve the issue?
https://gist.github.com/Midacts/be1f8ed00cb13764996a31de3e65b5ce

(Lines 109-110 show the updated code)

@brianbunke
Contributor

At a base level, yep, that's it! We may want to kick off & $Fix in a nested Try/Catch block, but I haven't decided on that yet. Would require writing back a brief section of $Error in the Catch.
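
For illustration only, the nested Try/Catch around & $Fix could look roughly like this, writing back a brief section of $Error in the Catch:

Try {
    & $Fix
} Catch {
    # Surface why the remediation itself failed instead of swallowing it
    Write-Warning -Message "Remediation failed: $($Error[0].Exception.Message)"
    throw $_
}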

@midacts
Contributor

midacts commented Apr 10, 2017

I added in the try/catch and tested it. I made it so that if the check after the remediation fails, it will output:

  • $Desired and its gettype() output
  • $Actual and its gettype() output
  • $Type so we can see what it is looking for.

How I tested:
I had my netdump-settingsEnable.vester.ps1 file specify $Type = 'string'.
In my config file, I changed "false" to false, so Vester saw that false was no longer a string but a boolean.
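
For illustration, the config edit just dropped the quotes around the value (the key name below is made up), so the JSON parser produced a [bool] where the test's $Type expected a [string]:

"netdumpSettingsEnable": "false"     <- parsed as [string]; matches $Type = 'string'
"netdumpSettingsEnable": false       <- parsed as [bool]; $Desired is no longer a string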

Below is an example output.
This change adds everything after "WARNING: Remediating host1.test.example.com".

I think it could be helpful to have all of this output for troubleshooting purposes, but you might think it is overkill or too verbose.

Describing Host Configuration: NetDump-SettingsEnable.Vester.ps1
WARNING: Expected: value to be empty but it was { }
WARNING: Remediating host1.test.example.com
WARNING: Rerunning the 'Network Dump Enable' test against host1.test.example.com
[-] Host host1.test.example.com - Network Dump Enable 429ms
Expected: value to be empty but it was { }
Desired: False - bool
Actual: false - string
Type: string
at line: 134 in C:\Program Files\WindowsPowerShell\Modules\Vester\Private\Template\VesterTemplate.Tests.ps1

I will open a PR after #114 is closed, since this change will affect that file as well.
The proposed changes are reflected in the gist below:
https://gist.github.com/Midacts/be1f8ed00cb13764996a31de3e65b5ce

@jpsider
Contributor

jpsider commented Apr 10, 2017

Random question: what happens if the remediation fails and the test fails a second time? Is it now in a loop, or does it just exit after the second check?

@jpsider
Contributor

jpsider commented Apr 10, 2017

Sorry, another question: is there a need to run the second test again, or should we just capture whether the '$Fix' was successful? I think the latter is cleaner.

@midacts
Contributor

midacts commented Apr 10, 2017

That's the part I was testing.

It runs the test.
If the test fails and -Remediate is specified, it tries to remediate.
Then it reruns the same test to make sure it now succeeds.
If it fails again, it throws an error, basically as it would if you hadn't kicked off -Remediate.
(I added in some additional logging for when the test that runs after the remediation fails.)

But after that part, it continues on to the next test.

@chriswahl
Member

chriswahl commented Apr 10, 2017 via email

@midacts
Contributor

midacts commented Apr 10, 2017

I see what you are saying.
I would say, just based on some of the testing I did today, that even if $Fix succeeds, that does not necessarily mean it will pass the test. Some other things could be misconfigured that lead to the test failing.
(Which is how I was able to run my test scenario above.)

@jpsider
Contributor

jpsider commented Apr 10, 2017

Hmm, I would think that since the goal is to have a specific test, with a known state, value, etc., a retest after a fix would be a waste. The fix either works or it needs to be re-coded. The only reason I'm asking is that we just spent time on 'performance', so this sounds like a step in the wrong direction as far as efficiency goes. If the fix is not working right and returns that it was successful, it will be picked up in the next round of tests.

@jpsider
Contributor

jpsider commented Apr 10, 2017

I think something that would potentially be more useful would be a rerun after all tests complete, only on the items that were remediated. Kind of a second round, but the scope would be limited. It would also potentially provide fewer results for someone to review.

@midacts
Contributor

midacts commented Apr 10, 2017 via email

@jpsider
Contributor

jpsider commented Apr 10, 2017

True, I don't think it's much time lost. But I'm wondering what the XML looks like now too. Are you able to post a snippet of that?

@jpsider
Contributor

jpsider commented Apr 10, 2017

Oh, I'm not saying I'm right or wrong, just throwing some thoughts out there.

@midacts
Contributor

midacts commented Apr 10, 2017 via email

@jpsider
Contributor

jpsider commented Apr 10, 2017

XML is awesome! Said no one ever, lol. But it has its uses. I use it with Vester as a way to enhance the results and share with my team.
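
For context, one common way to get NUnit-style XML out of a Pester run (whether or not the tests are invoked through Vester) is Pester's own output parameters; the paths below are hypothetical:

Invoke-Pester -Script 'C:\Vester\Tests' -OutputFormat NUnitXml -OutputFile 'C:\Vester\Results\VesterResults.xml'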

@midacts
Contributor

midacts commented Apr 12, 2017

This is going back to adding more info in the try/catch of the remediate. I was testing out some new tests and mine failed with this:

Desired: 1500 - int
Actual: 1000 - long
Type: int

It was really nice to know that it needed to be a 'long' and not an 'int'.

In this scenario, I ran GetType() on the object and it showed that it was an 'Int64', which is a long. I figured 'int' would work, but apparently not.
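
For illustration, this is the kind of console check that surfaces the distinction (the value is made up):

PS> $actual = 1000l                # what the API actually returned
PS> $actual.GetType().Name
Int64                              # i.e. a 'long', not an 'int'
PS> $actual -is [int]
False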

@midacts
Contributor

midacts commented Apr 12, 2017

I've been doing some more testing on this and wanted to bring something up for discussion.
I want to preface this by saying that my findings might mean we will not want to rerun tests when -Remediate is specified.

Say we are running a VM test. If, in the desired state, we are just specifying $Object, which houses the output from Get-VM, then when you go to rerun the test with -Remediate specified, the test will fail.
The reason for this (and this is the whole crux of the issue) is that $Object is still referencing the original Get-VM value.

If you run a test, it fails, and you remediate it, the value should be updated.
Unless we rerun Get-VM, the rerun test will fail every time.

That being said, if we use (Get-VM $Object) in the $Actual scriptblock instead, it will guarantee the values are fresh no matter what we do. The problem with this is that we will see a definite increase in the time it takes to run tests.
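
To make that concrete, here is a rough sketch of the two styles of $Actual scriptblock for a VM test (the property path is hypothetical):

# Uses the $Object captured before remediation; stale if $Fix changed the VM
$Actual = {
    $Object.ExtensionData.Config.Tools.SyncTimeWithHost
}

# Re-queries the VM each time, so a post-remediation rerun sees fresh data,
# at the cost of an extra Get-VM call per evaluation
$Actual = {
    (Get-VM -Name $Object.Name).ExtensionData.Config.Tools.SyncTimeWithHost
}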

I would personally vote against rerunning tests after remediation.
I would instead suggest this:
- user runs a test
- they see something fail, so they rerun the test with -Remediate
- after the remediation run, they rerun the tests to make sure things are good

As a side note, I would still really like to have more verbose output in the else { block when a test fails in the VesterTemplate.Tests.ps1 file.

Meaning, instead of what it is currently:

# -Remediate is not active, so just report the error
throw $_

to change it to:

# -Remediate is not active, so just report the error
throw "$($_.Exception.Message)`nDesired: $Desired - $($Desired.gettype())`nActual: $(& $Actual) - $($(& $Actual).gettype())`nType: $Type"

The reason is that currently, if a test fails, all we know is that it failed. The error it throws is:
Expected: value to be empty but it was { }. I think it would be nice to know a little bit about why it failed.

When Compare-Object is run against $Actual and $Desired, any difference it finds is the error Pester throws.

Here is the line:
Compare-Object -ReferenceObject $Desired -DifferenceObject (& $Actual -as $Type) | ShouldBeNullOrEmpty
