On https://webstatus.dev/ and feature details pages like https://webstatus.dev/features/dialog we show a test score between 0 and 100% based on WPT results.

The current approach is to count passing subtests divided by the number of known subtests, the same as the default wpt.fyi view. Let's evaluate how well that works, and compare it to other scoring methods.
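For concreteness, here is a minimal sketch of that calculation in TypeScript. The TestResult shape and field names are hypothetical, not the actual webstatus.dev types:

```ts
// Hypothetical per-test result shape (not the real webstatus.dev model).
interface TestResult {
  passingSubtests: number;
  totalSubtests: number;
}

// Current approach: pool all subtests across all tests and take the passing fraction.
function subtestScore(results: TestResult[]): number {
  const passing = results.reduce((sum, r) => sum + r.passingSubtests, 0);
  const total = results.reduce((sum, r) => sum + r.totalSubtests, 0);
  return total === 0 ? 0 : passing / total;
}
```

With the counts from the first example below (225 passing out of 258 subtests), this returns roughly 0.87.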
Desirable properties:
Correlates with implementation quality as judged by web developers
Correlates with implementation completeness as judged by browser engineers
Easy to explain and understand
The options, each with its wpt.fyi URL query parameter, are listed below. (Note that the URLs aren't exactly right and include tentative tests, working around web-platform-tests/wpt.fyi#3930 to make comparison possible.)

Passing subtests (view=subtest)
This method counts all subtests individually; the score is the number of passing subtests divided by the total, as in the current approach.
Example: 225 / 258 = 87%
Pros:
Cons:

Partially passing tests (view=interop)
Example: 105.12 / 109 = 96%
Pros:
Total number of tests (the denominator) is easy to explain and understand
Cons:
Fixing a timeout or subtest can cause new failing subtests to appear, reducing the score. (But the effect is smaller than for view=subtest.)
Linking to view=interop would likely cause confusion, as the view is named for the Interop project. (Renaming/aliasing the URL query parameter would address this.)
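To make the fractional numerator concrete, here is a sketch of partial-credit scoring under the assumption that each test contributes the fraction of its subtests that pass; that reading is inferred from the 105.12 / 109 example above, not taken from the wpt.fyi implementation:

```ts
// Partial credit: each test contributes passingSubtests / totalSubtests, so the
// numerator can be fractional (e.g. 105.12) while the denominator is the plain
// test count. Tests without subtests (e.g. reftests) would need their own
// pass/fail handling, which is omitted here.
function partialPassScore(
  results: { passingSubtests: number; totalSubtests: number }[],
): number {
  const credit = results.reduce(
    (sum, r) =>
      sum + (r.totalSubtests === 0 ? 0 : r.passingSubtests / r.totalSubtests),
    0,
  );
  return results.length === 0 ? 0 : credit / results.length;
}
```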
Fully passing tests (view=test)
Example: 102 / 109 = 94%
Pros:
Total number of tests (the denominator) is easy to explain and understand
Cons:
Fixing a subtest doesn't count unless all subtests pass. (Does not correlate with improvement.)
Similarly, introducing a single failing subtest in a previously passing test has a large effect.
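A sketch of the fully-passing-tests method, using the same hypothetical result shape as the earlier sketches:

```ts
// Fully passing tests: a test counts only if every one of its subtests passes.
function fullPassScore(
  results: { passingSubtests: number; totalSubtests: number }[],
): number {
  const fullyPassing = results.filter(
    (r) => r.totalSubtests > 0 && r.passingSubtests === r.totalSubtests,
  ).length;
  return results.length === 0 ? 0 : fullyPassing / results.length;
}
```

This is where the cons above bite: a test with 49 of 50 subtests passing contributes exactly as much as a test with none passing.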
Next steps
Evaluate how well each method corresponds with feature completeness/quality, by taking a random sample of features and listing what the scores would be. Things to consider:
What does the score tend to be for features not supported at all? (Closer to 0 is better.)
What does the score tend to be for features browser engineers and web developers think are complete? (Closer to 100 is better, and below 80 or 90 is bad.)
What does the score tend to be for in-development features? (Exact score is not important, but an even progression is better.)
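One possible shape for that evaluation, reusing the scoring functions sketched above on a hypothetical sample of features (the type name and output format here are made up):

```ts
// Hypothetical comparison harness: print all three scores per feature so the
// methods can be checked against engineers' and developers' expectations.
// Assumes subtestScore, partialPassScore and fullPassScore from the sketches above.
type FeatureSample = Record<
  string,
  { passingSubtests: number; totalSubtests: number }[]
>;

function compareScores(sample: FeatureSample): void {
  for (const [feature, results] of Object.entries(sample)) {
    console.log(
      feature,
      `subtest=${(subtestScore(results) * 100).toFixed(0)}%`,
      `partial=${(partialPassScore(results) * 100).toFixed(0)}%`,
      `full=${(fullPassScore(results) * 100).toFixed(0)}%`,
    );
  }
}
```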
cc @gsnedders @jgraham since we have discussed test scoring many times over the years, most recently in web-platform-tests/rfcs#190.