
Add support for tolerances in animation snapshot tests #38

Open
lukebradford opened this issue Jul 30, 2020 · 1 comment
Labels
enhancement Request for a new feature or improvement to an existing feature

Comments

@lukebradford
Collaborator

Animation snapshot tests do not currently support the per-pixel or overall tolerances that are supported by the underlying FBSnapshotTestCase.
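
For reference, the underlying framework's Swift helper already exposes both parameters. A minimal usage sketch, assuming the uber/ios-snapshot-test-case fork (the tolerance values here are illustrative):

```swift
import FBSnapshotTestCase
import UIKit

final class ExampleSnapshotTests: FBSnapshotTestCase {

    func testViewWithTolerances() {
        let view = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
        view.backgroundColor = .red

        // perPixelTolerance: how different a single pixel's color may be (0...1)
        // before it counts as changed. overallTolerance: the fraction of pixels
        // (0...1) allowed to differ before the comparison fails.
        FBSnapshotVerifyView(view, perPixelTolerance: 0.02, overallTolerance: 0.01)
    }
}
```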

lukebradford added the enhancement label on Jul 30, 2020
@NickEntin
Collaborator

NickEntin commented Jul 30, 2020

For background, this was an intentional decision not to support tolerances from the beginning. Non-zero tolerances make it easy for tests to pass spuriously and let visual regressions slip through. Small changes (not caught by the tests) can build up, and whoever makes the final change that causes the tests to fail has to address all of the built-up problems.

macOS Catalina changed the rendering process in the simulator from software rendering to GPU-based rendering (with Metal). Unfortunately, this introduced some variance in how snapshot images are generated across different machines (see uber/ios-snapshot-test-case#109), which is causing snapshot tests to fail. Currently the only known successful workaround is to increase the pixel-matching tolerance.

For the snapshots of individual frames (SnapshotVerify(animation:on:at:) and associated methods), it's quite straightforward to implement tolerances, since those methods call through to FBSnapshotVerifyView. There's still an open question of what exactly supporting tolerances for the animated snapshots should look like, though.

For single frames, there are two tolerance parameters: perPixelTolerance, which measures how different an individual pixel can be before the framework considers it to have changed, and overallTolerance, which measures what portion of pixels in the image can differ (based on the per-pixel determination) before the framework considers the image to have changed. With the animated snapshot images, there's a third dimension: the number of frames in the image that have changed. I could see this coming together in a few different ways, for example (a sketch of these options follows the list):

  • overallTolerance applies equally to each frame in the image. If any frame surpasses this tolerance, the comparison fails.
  • overallTolerance applies to the image as a whole. In other words, instead of comparing the image frame-by-frame, we add up the number of different pixels across all frames and divide that by the total number of pixels in all frames.
  • overallTolerance gets split into two different measurements, something like frameTolerance and sequenceTolerance. These would measure the portion of differing pixels in each frame and the number of differing frames in the sequence, respectively.
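
A minimal sketch of how these three semantics could be expressed, assuming the fraction of differing pixels per frame has already been computed using perPixelTolerance (all names here are hypothetical, not part of the current API):

```swift
import Foundation

/// Candidate semantics for applying tolerances across an animation's frames.
enum AnimationToleranceMode {
    case perFrame(overallTolerance: Double)                          // option 1
    case wholeImage(overallTolerance: Double)                        // option 2
    case twoLevel(frameTolerance: Double, sequenceTolerance: Double) // option 3

    /// Returns true if the animation snapshot should be considered unchanged.
    /// `frameDifferences[i]` is the fraction (0...1) of pixels in frame `i`
    /// that exceeded perPixelTolerance; frames are assumed to be equal-sized.
    func matches(frameDifferences: [Double]) -> Bool {
        guard !frameDifferences.isEmpty else { return true }

        switch self {
        case let .perFrame(overallTolerance):
            // Every frame must individually stay within the tolerance.
            return frameDifferences.allSatisfy { $0 <= overallTolerance }

        case let .wholeImage(overallTolerance):
            // With equal-sized frames, the ratio of differing pixels to total
            // pixels across the whole image reduces to the mean per-frame fraction.
            let mean = frameDifferences.reduce(0, +) / Double(frameDifferences.count)
            return mean <= overallTolerance

        case let .twoLevel(frameTolerance, sequenceTolerance):
            // A frame "differs" if it exceeds frameTolerance. The sketch then
            // expresses sequenceTolerance as a fraction of differing frames,
            // though it could equally be an absolute frame count.
            let differingFrames = frameDifferences.filter { $0 > frameTolerance }.count
            return Double(differingFrames) / Double(frameDifferences.count) <= sequenceTolerance
        }
    }
}
```

One thing the sketch makes visible: the first option is the special case of the third with sequenceTolerance set to zero (and frameTolerance playing the role of overallTolerance), so the third option strictly generalizes the first.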

We should do some experimentation to find the approach that resolves the GPU-based rendering problem while still catching regressions as minor as possible. It sounds like adjusting perPixelTolerance is the main lever for addressing the GPU problem, so we might be able to introduce only that parameter for now.
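
If only perPixelTolerance is introduced at first, the single-frame variant could simply forward it to the existing FBSnapshotVerifyView entry point. A minimal sketch, again assuming the uber/ios-snapshot-test-case Swift helper (the method name and shape below are hypothetical, not this library's actual API):

```swift
import FBSnapshotTestCase
import UIKit

extension FBSnapshotTestCase {

    /// Hypothetical helper: verifies a single rendered animation frame,
    /// forwarding only perPixelTolerance to FBSnapshotVerifyView and leaving
    /// overallTolerance at its zero default.
    func verifyAnimationFrame(
        _ frameView: UIView,
        identifier: String,
        perPixelTolerance: CGFloat = 0,
        file: StaticString = #file,
        line: UInt = #line
    ) {
        FBSnapshotVerifyView(
            frameView,
            identifier: identifier,
            perPixelTolerance: perPixelTolerance,
            file: file,
            line: line
        )
    }
}
```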
