Epic: Tracking accuracy #724

Open
ErikBjare opened this issue Mar 19, 2022 · 0 comments

ErikBjare (Member) commented Mar 19, 2022

This issue is meant to track accuracy issues present in ActivityWatch.

Inaccuracies can arise in two places: during collection, and during analysis.

Collection

Bugs in how events are sent, queued, and merged can affect the data and lead to inconsistencies like unexpected gaps and overlaps.

Inaccuracies introduced by bugs during collection are taken very seriously, as they may lead to unrecoverable loss of usable tracking data. However, hard-to-reproduce behavior and the difficulty of testing collection code well make it hard to ensure things are bug-free.

Some inaccuracies arise from assumptions made during collection, such as the constants chosen for polling intervals, AFK timeouts, etc. These should be set intelligently to minimize inaccuracies and strike a good tradeoff between data detail and space/compute requirements.
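
To illustrate how these constants interact, here is a minimal sketch of heartbeat merging with a simplified Event type. The constant names and values are illustrative, not the actual ActivityWatch defaults or the exact aw-core API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative constants (assumed values, not ActivityWatch defaults):
POLL_INTERVAL = 1.0            # seconds between heartbeats sent by a watcher
PULSETIME = 2 * POLL_INTERVAL  # max gap tolerated when merging heartbeats

@dataclass
class Event:
    timestamp: datetime
    duration: timedelta
    data: dict = field(default_factory=dict)

def heartbeat_merge(last: Event, heartbeat: Event, pulsetime: float) -> Optional[Event]:
    """Merge `heartbeat` into `last` if the data matches and the gap between
    the end of `last` and the start of `heartbeat` is within `pulsetime`.
    Returns the merged event, or None if a new event should be started."""
    if last.data != heartbeat.data:
        return None
    last_end = last.timestamp + last.duration
    gap = (heartbeat.timestamp - last_end).total_seconds()
    if gap > pulsetime:
        return None  # gap too long: merging here would paper over a real gap
    new_end = max(last_end, heartbeat.timestamp + heartbeat.duration)
    return Event(last.timestamp, new_end - last.timestamp, last.data)

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    a = Event(now, timedelta(seconds=1), {"app": "firefox"})
    b = Event(now + timedelta(seconds=1.5), timedelta(seconds=1), {"app": "firefox"})
    # A pulsetime smaller than the poll interval produces artificial gaps;
    # a much larger one can swallow short genuine breaks.
    print(heartbeat_merge(a, b, PULSETIME))
```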

Known issues

  • There are occurrences where events are duplicated; the duplicate often keeps receiving heartbeats, so there end up being two events with the same start and data but different durations (a diagnostic sketch follows this list). See: Warning: gap of negative duration #239 (comment)
    • Downstream, this may lead to warnings about overlapping events and negative gaps.
  • Sometimes there are strange occurrences of a single very long event stretching across multiple others, such as in this screenshot: Negative time on Activity view #602 (comment)
    • Cause unknown
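
As a diagnostic for the duplicated-event issue above, here is a minimal sketch that groups events by (timestamp, data) and reports groups whose members differ only in duration. The event dict shape is an assumption based on typical exported events, and find_duplicate_events is a hypothetical helper, not part of aw-core:

```python
import json
from collections import defaultdict

def find_duplicate_events(events):
    """Group events by (timestamp, data); any group with more than one member
    matches the reported pattern: same start and data, differing durations."""
    groups = defaultdict(list)
    for event in events:
        key = (event["timestamp"], json.dumps(event["data"], sort_keys=True))
        groups[key].append(event)
    return {key: evs for key, evs in groups.items() if len(evs) > 1}

# Example with plain dicts shaped like exported events:
events = [
    {"timestamp": "2022-03-19T10:00:00Z", "duration": 30.0, "data": {"app": "firefox"}},
    {"timestamp": "2022-03-19T10:00:00Z", "duration": 45.0, "data": {"app": "firefox"}},
]
for key, dupes in find_duplicate_events(events).items():
    print(key, [e["duration"] for e in dupes])
```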

Analysis

Bugs in transforms and queries may lead to analysis issues. These are generally considered less critical: as long as the underlying data is correct, the analysis can always be corrected later. They can still be a significant source of frustration for end-users, however, who may find the resulting buggy analysis useless.

Transforms and queries are also easy to test, and the dual implementation of transforms in both Python and Rust makes them a suitable target for cross-implementation testing to ensure robust analysis methods.
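
For instance, a cross-implementation check could run the same canonical query against both a Python and a Rust server instance and compare the results. This is only a sketch: the ports, bucket id, query text, and the exact query endpoint/payload shape are assumptions that would need adjusting to a real setup:

```python
import requests

# Bucket id, hostname, ports, and query text below are assumptions for
# illustration; adjust them to match a local setup.
QUERY = {
    "timeperiods": ["2022-03-18T00:00:00+00:00/2022-03-19T00:00:00+00:00"],
    "query": [
        'events = query_bucket("aw-watcher-window_myhost");'
        'RETURN = merge_events_by_keys(events, ["app"]);'
    ],
}

def run_query(base_url: str) -> list:
    # Assumes both server implementations expose the same query endpoint,
    # so identical input data should yield identical output.
    response = requests.post(f"{base_url}/api/0/query/", json=QUERY)
    response.raise_for_status()
    return response.json()

python_result = run_query("http://localhost:5600")  # aw-server (Python)
rust_result = run_query("http://localhost:5666")    # aw-server-rust (assumed port)
assert python_result == rust_result, "query results differ between implementations"
```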

Known issues
