This repository has been archived by the owner on Jul 8, 2020. It is now read-only.
Hi! I found this amazing project and I've started to use it a lot 😎 thank you so much for creating it.

My question is whether it's possible (somehow) to have one of these in the PR comment:

- The results for the production website and the stage deploy, side by side for comparison
- The score difference between the production URL and the stage URL

Do you think it's possible to have this information?

To be clear, I'm not asking for a feature. I just want to know if someone else has faced this particular case, where I'd like a PR to be "rejected" (asked for improvements) when the stage's score is worse than production's score (I know I can set a flag, but that's a fixed threshold; I want to compare against my own production deployment).

Hope I was clear enough! 😊
Here to +1! I was just about to open an identical issue.
I love the Lighthouse bot, but it would be awesome if I could pass an additional URL on the CLI to compare against a real benchmark. Ideally this would add two additional columns to the results table, making the columns:

- New score
- Old score (from the new production URL parameter)
- Change (the +/- difference between the new and old scores)
- Required threshold
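The comparison described above could be sketched roughly like this. This is only an illustration, not part of the bot: `diffScores` and `shouldReject` are hypothetical names, and the report shape assumed here follows Lighthouse's JSON output (`categories.*.score` on a 0–1 scale), with fabricated example fragments:

```javascript
// Hypothetical sketch: build the New score / Old score / Change rows
// from two Lighthouse JSON reports, then fail when anything regressed.
// Assumes Lighthouse's report shape: categories.<id>.score in [0, 1].
function diffScores(prodReport, stageReport) {
  const rows = [];
  for (const id of Object.keys(prodReport.categories)) {
    const oldScore = Math.round(prodReport.categories[id].score * 100);
    const newScore = Math.round(stageReport.categories[id].score * 100);
    rows.push({ category: id, oldScore, newScore, change: newScore - oldScore });
  }
  return rows;
}

function shouldReject(rows) {
  // Reject the PR when any stage score is worse than production's.
  return rows.some(r => r.change < 0);
}

// Fabricated report fragments, standing in for two real Lighthouse runs:
const prod = { categories: { performance: { score: 0.9 } } };
const stage = { categories: { performance: { score: 0.85 } } };
const rows = diffScores(prod, stage);
console.log(rows);               // one row: old 90, new 85, change -5
console.log(shouldReject(rows)); // true — stage regressed below production
```

In a CI setup, the two reports would come from running `lighthouse <url> --output=json` against the production and stage URLs, and `shouldReject` would map to a nonzero exit code.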