feat: Add removeFeed(name|key)
#56
base: master
Conversation
With the addition of `removeFeed()`, the previous paradigm of numbering storage directories from `0` and incrementing with each feed no longer works, since loading would have stopped when it encountered a deleted storage's number. Instead, all storage directory names are now stored in an index file called `dirs` inside a storage directory called `index`. The index is updated whenever a feed is added or removed. The index is agnostic to directory names except for one assumption: names must not contain commas, because the names are stored comma-delimited. Directory names are still incrementing numbers. Because feeds can be removed, the number of tracked feeds no longer matches the largest directory number used, so the maximum number is tracked and each new feed is numbered from that maximum. In the encryption key storage test in the regression tests, the expected resource count was incremented to account for the added index storage.
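To make the format concrete, here is a minimal sketch of the `dirs` index serialization described above. The function names (`serializeIndex`, `parseIndex`) are assumptions for illustration, not the PR's actual code; only the comma-delimited layout comes from the description.

```javascript
// Serialize the list of active feed directory names for the `dirs` file.
// Names must not contain commas, per the index format's one assumption.
function serializeIndex (dirNames) {
  return dirNames.join(',')
}

// Parse the `dirs` file contents back into directory names.
// Filtering empties makes an empty file round-trip to an empty list.
function parseIndex (contents) {
  return contents.split(',').filter(function (name) {
    return name.length > 0
  })
}
```

For example, after removing the feed in directory `1`, the index would hold `'0,2'`, which parses back to `['0', '2']` without any assumption that the numbers are contiguous.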
I added an implementation of the storage index. It feels a bit clunky (especially the legacy support when loading feeds), so I am open to any suggestions. However, from my testing, it all works.
removeFeed(name|key)
As the previous comment noted, `writeStringStorage()` didn't account for writing to a storage whose existing data was longer than the new data. This wasn't a problem until the storage index was implemented: if feeds are removed, the new index contents are very likely shorter than the previous contents. To fix this, the storage size is retrieved first. If `stat`ing the file errors, the file is assumed not to exist yet and we just go ahead and write to it.
This will not update the index file during the loading process, but will allow future updates to the index to include loaded feeds.
To summarize my storage solution: I keep a comma-separated list of the current storage directories (one per feed) in a dedicated storage. It falls back to the original sequential format when no index is found, but automatically creates an index once the feeds are modified, so legacy storage gracefully upgrades.
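The graceful-upgrade load path can be sketched as follows. The function name and parameters here are assumptions for illustration; the actual code lives in the PR's `_loadFeeds()` changes.

```javascript
// Resolve the list of feed directories to load.
// `indexContents` is the `dirs` file as a string, or null if no index
// storage was found; `storageExists` probes whether a directory is present.
function resolveFeedDirs (indexContents, storageExists) {
  if (indexContents !== null) {
    // An index exists: it is the source of truth for directory names.
    return indexContents.split(',').filter(function (n) {
      return n.length > 0
    })
  }
  // Legacy layout: probe sequential names 0, 1, 2, ... until one is missing.
  var dirs = []
  for (var i = 0; storageExists(String(i)); i++) dirs.push(String(i))
  return dirs
}
```

Once any feed is added or removed, writing the index for the resolved list upgrades a legacy layout in place, which matches the fallback-then-create behavior described above.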
@lejeunerenard This is an exciting set of changes to have in multifeed! Thank you for taking the time to put this together in a PR. 🎈 I've been pretty overloaded lately re: reviewing. Is this a change that you're depending on / using in your own projects, or non-blocking for you?
@hackergrrl I am not depending on it. There is still an issue in my storage solution; I've also been too busy to track down the bug. I do hope to get to it soon, however. Feel free to ignore this PR for now since you are busy and it's not ready.
The initial `!dir` check in `next()` would bail when loading index `0` which is the first feed in the legacy storage structure. All subsequent feeds would be lost.
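The bug is the classic JavaScript falsy-check pitfall. A small illustration (not the PR's actual code, where the check lives in `next()`):

```javascript
// Buggy bail condition: `!dir` is true not just for undefined, but also
// for 0 and '' -- so the very first legacy feed (index 0) ends the loop.
function shouldBailBuggy (dir) {
  return !dir
}

// Fixed bail condition: only stop when there really is no directory.
function shouldBailFixed (dir) {
  return dir === undefined
}
```

With the fixed check, index `0` is loaded normally and all subsequent feeds are reached.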
It was assumed previously that the max could be loaded whenever a feed was added or removed, but replicated feeds didn't refresh the max dir index before creating a new feed with the default starting index `0`.
An empty string was being converted to `0` and caused the numeric feed names to start at `1` instead of `0`.
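The coercion at fault: `Number('')` is `0`, so an empty index made the tracked max start at `0` and the first feed got number `1`. A sketch of a max computation that avoids this (a hypothetical helper, not the PR's exact code):

```javascript
// Find the largest numeric directory name in use.
function maxDirNumber (dirNames) {
  return dirNames.reduce(function (max, name) {
    var n = parseInt(name, 10) // NaN for '' and non-numeric names,
                               // unlike Number('') which coerces to 0
    return isNaN(n) ? max : Math.max(max, n)
  }, -1)                       // -1 so the first feed gets max + 1 === 0
}
```

Starting the reduce at `-1` (rather than letting `''` coerce to `0`) is what keeps the numbering anchored at `0` for a fresh multifeed.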
Here is an initial stab at implementing `removeFeed()`. It takes a `name` or a `key` and will remove the feed from the multifeed and destroy it (which clears the storage as well as closes the feed). This implements #19 but without the blacklist mechanism, which I felt should be a separate mechanism/feature.
Potential Issue
Currently there is one known issue with the way storage works. A storage is created per feed and named sequentially, which is problematic for persistent storage modules. When loading feeds in `_loadFeeds()`, it is assumed the feeds are sequential and that everything has been loaded once a storage fails to load. So if we remove a storage and then reload feeds, all feeds after the removed one are ignored. A potential solution is to add another storage instance holding an index of all feeds, defaulting to sequential names for legacy support when the index storage isn't given / loaded.