tl;dr - make sure you're actually testing with the same settings and follow the advice in the variance docs
You're probably here because you filed an issue wondering why metrics or performance results were different between two different runs. We are deeply appreciative of your effort to improve Lighthouse by letting us know!
First, check that you're actually testing with the same settings, especially if you're using a non-default profile such as desktop. Different channels have different ways of configuring throttling. Look carefully at the available CLI flags and consider using the lr-desktop-config.js if you're trying to match the DevTools or PageSpeed Insights desktop profile. Assuming all settings are identical, the remaining differences are likely due to variability.
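For illustration, here is a minimal Node API sketch that loads the desktop config explicitly so every run you compare uses identical settings. It assumes a CommonJS-compatible Lighthouse release and a placeholder URL; the exact location of lr-desktop-config.js has moved between versions, so check the path in the release you have installed.

```js
// Minimal sketch: run Lighthouse with the same desktop config every time.
// Assumes a CommonJS-compatible Lighthouse release; the config path below
// has moved between versions (lighthouse-core/config vs core/config).
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');
const desktopConfig = require('lighthouse/lighthouse-core/config/lr-desktop-config.js');

async function run(url) {
  const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']});
  // Same flags + same config => comparable runs.
  const flags = {port: chrome.port, onlyCategories: ['performance']};
  const {lhr} = await lighthouse(url, flags, desktopConfig);
  await chrome.kill();
  return lhr.categories.performance.score;
}

run('https://example.com/').then(score => console.log('Performance score:', score));
```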
Performance variability of webpages is a very challenging topic, especially if Lighthouse is one of your or your client's first experiences with performance measurement. We've documented the most common sources of performance variability in our variance docs, along with steps you can take to limit their impact.
Lighthouse has a few internal mechanisms that attempt to limit the impact of variability, such as simulated throttling and the CPU/Memory Power estimate in the runtime section of the report, plus companion projects like lighthouse-ci that can automatically run Lighthouse many times. Ultimately, though, there is only so much that can be done from within Lighthouse itself; the eventual results rely on the stability of the environment in which Lighthouse is run (as well as the particular URL you're testing!).
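As an example of the "run it many times" approach, a lighthouse-ci configuration along these lines collects several runs per URL so you're looking at an aggregated result rather than a single sample. This is a sketch with a placeholder URL and placeholder budget values; consult the lighthouse-ci docs for the exact options supported by your version.

```js
// lighthouserc.js — sketch of a lighthouse-ci config that runs Lighthouse
// several times per URL so reported values are aggregated, not single samples.
// The URL and score threshold are placeholders, not recommendations.
module.exports = {
  ci: {
    collect: {
      url: ['https://example.com/'],
      numberOfRuns: 5,              // aggregate over 5 runs to smooth variance
      settings: {preset: 'desktop'},
    },
    assert: {
      assertions: {
        'categories:performance': ['warn', {minScore: 0.9}],
      },
    },
    upload: {target: 'temporary-public-storage'},
  },
};
```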
Comparing Lighthouse Results From Different Environments
Inevitably, you or your clients will try to compare Lighthouse results from different environments (PSI vs. local, your office machine vs. your home machine, etc.). Because of the same underlying variance factors described in our documentation, these results may be systematically different in ways that never align. We highly recommend benchmarking in a consistent environment, but we understand these comparisons are going to happen. If you must compare across multiple environments, pick the one you'll measure yourself against as the standard, and then calibrate the other environments to match it as closely as possible using throttling settings. We are working on ways to make this process a little easier (e.g. #9085).
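One sketch of what that calibration can look like: pin the throttling settings explicitly in every environment instead of relying on each one's defaults, so the remaining differences come only from hardware and network. The numeric values below are illustrative placeholders, not recommendations.

```js
// Sketch: pass identical, explicit throttling settings in every environment
// you run Lighthouse in. Values below are illustrative placeholders.
const settings = {
  throttlingMethod: 'simulate',
  throttling: {
    rttMs: 150,                   // simulated round-trip time
    throughputKbps: 1638.4,       // simulated downlink
    cpuSlowdownMultiplier: 4,     // simulated CPU throttling
  },
};

// Reuse the same settings object everywhere, e.g. via the Node API:
//   await lighthouse(url, {port: chrome.port}, {extends: 'lighthouse:default', settings});
```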