Fix system lag on copy (WIP) #506
base: master
Conversation
// Filename is derived from a hash of the entry's bytes.
getEntryFilename (entry) {
    return `${this.REGISTRY_DIR}/${entry.asBytes().hash()}`;
}
Omg was this the problem all along?
I think we could also try using just a random string, since the filename is saved in the JSON anyway.
Edit: never mind, that would create duplicate entries if you copy the same image twice.
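For illustration only, a minimal sketch of that random-name idea, assuming a GJS environment with GLib imported; GLib.uuid_string_random() is just one possible source of a random name, and none of this is the project's actual code. It also shows why duplicates would appear:

// Hypothetical alternative: use a random UUID as the filename instead of a
// content hash. This skips hashing the clipboard data on the main thread,
// but copying the same image twice produces two different filenames, so the
// history would end up with duplicate entries.
getEntryFilename (entry) {
    return `${this.REGISTRY_DIR}/${GLib.uuid_string_random()}`;
}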
I'm not completely sure — I only get lag on copy after several days of uptime, so I end up having a multi-day edit/test cycle.
I'll report back after further testing.
After testing it some more, copying large images still causes freezes on copy (up to several seconds), and copying text causes short freezes (270ms).
It might be best not to compute the hash at all (or at least not on the main thread), since the first copy of a large image always requires computing the hash, even if the result is cached afterwards.
This PR currently removes the content checks, since as the history grows, every new copy would have to scan the entire history for duplicates.
A real fix would probably require some way to calculate hashes off the main thread instead of blocking.
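For reference, one possible shape of such a fix, as a sketch only: it assumes classic GJS imports, that the entry data is available as a Uint8Array, and the name hashEntryAsync plus the chunk size are invented for illustration. GJS extensions have no convenient worker threads, so rather than a true background thread the sketch amortizes the SHA-256 over idle callbacks, hashing one chunk per main-loop iteration so even a large image never blocks input for long.

const { GLib } = imports.gi;

function hashEntryAsync(bytes, callback) {
    const CHUNK_SIZE = 256 * 1024; // hash 256 KiB per main-loop iteration
    const checksum = GLib.Checksum.new(GLib.ChecksumType.SHA256);
    let offset = 0;

    GLib.idle_add(GLib.PRIORITY_LOW, () => {
        // Feed the next slice of the buffer into the running checksum.
        checksum.update(bytes.subarray(offset, offset + CHUNK_SIZE));
        offset += CHUNK_SIZE;
        if (offset < bytes.length)
            return GLib.SOURCE_CONTINUE; // more data left, keep this idle source
        callback(checksum.get_string()); // hex digest of the whole buffer
        return GLib.SOURCE_REMOVE;
    });
}

Usage would then be something like hashEntryAsync(entryBytes, hash => writeEntryFile(hash)), where writeEntryFile stands in for whatever persists the entry once its filename is known.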
Not yet fully tested; should fix #428