
Add information about confidence #70

Open
moeller0 opened this issue Jan 19, 2024 · 1 comment

@moeller0

The current draft contains the following section:

Confidence of test-results

As described above, a tool running the algorithm typically defines a time-limit for the execution of each of the stages. For example, if the tool allocates a total run-time of 40 seconds and executes a full downlink test followed by an uplink test, it may allocate 10 seconds to each of the four saturation-stages (downlink capacity saturation, downlink responsiveness saturation, uplink capacity saturation, uplink responsiveness saturation).

As the different stages may or may not reach stability, we can define a "confidence score" for the different metrics (capacity and responsiveness) the methodology was able to measure.

We define "Low" confidence in the result if the algorithm was not even able to execute 4 iterations of the specific stage. Meaning, the moving average is not taking the full window into account.

We define "Medium" confidence if the algorithm was able to execute at least 4 iterations, but did not reach stability based on standard deviation tolerance.

We define "High" confidence if the algorithm was able to fully reach stability based on the defined standard deviation tolerance.

It must be noted that, depending on the chosen standard deviation tolerance, other parameters of the methodology, and the network-environment, a measurement may never converge to a stable point. This is expected and part of the dynamic nature of networking and the accompanying measurement inaccuracies. This is why imposing a time-limit is crucial, together with an accurate depiction of the "confidence" the methodology was able to generate.
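For illustration, the classification described above could be sketched in Go (the project's language) roughly as follows. The window size of 4 and the relative standard-deviation check are assumptions standing in for the draft's moving-average window and stability tolerance; this is not goresponsiveness's actual implementation:

```go
package main

import (
	"fmt"
	"math"
)

// Confidence scores how trustworthy a measured metric is.
type Confidence int

const (
	Low    Confidence = iota // fewer than 4 iterations: moving-average window never filled
	Medium                   // at least 4 iterations, but stability was not reached
	High                     // stability reached within the standard deviation tolerance
)

func (c Confidence) String() string {
	return [...]string{"Low", "Medium", "High"}[c]
}

// classify maps the per-iteration samples of one stage to a confidence score.
// windowSize and sdTolerance are hypothetical parameters standing in for the
// draft's moving-average window (4 iterations) and standard deviation tolerance.
func classify(samples []float64, windowSize int, sdTolerance float64) Confidence {
	if len(samples) < windowSize {
		return Low // the moving average never took the full window into account
	}
	window := samples[len(samples)-windowSize:]
	var mean float64
	for _, s := range window {
		mean += s
	}
	mean /= float64(windowSize)
	var variance float64
	for _, s := range window {
		variance += (s - mean) * (s - mean)
	}
	sd := math.Sqrt(variance / float64(windowSize))
	// Treat "stability" as the relative standard deviation of the last
	// window staying within the tolerance (an assumption for this sketch).
	if sd/mean <= sdTolerance {
		return High
	}
	return Medium
}

func main() {
	fmt.Println(classify([]float64{100, 101}, 4, 0.05))               // Low: window never filled
	fmt.Println(classify([]float64{100, 140, 90, 130, 80}, 4, 0.05))  // Medium: too noisy
	fmt.Println(classify([]float64{100, 101, 99, 100, 100}, 4, 0.05)) // High: stable
}
```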

This assumes that all clients will report confidence as part of their normal output. I am not sure whether goresponsiveness currently does that. Maybe we should add it, as this is generally good practice in data reporting?

@hawkinsw hawkinsw self-assigned this Jan 19, 2024
@hawkinsw
Member

Yes! I will add that!!
