[RFC] User-space memory cache for preheating jobs #3742
Test Description: I tested the new feature locally, running dfget on the seed peer to download the same file.
Settings: Peers, Target File
Test Cases: Case 1, Case 2
Result: With Cache, Without Cache
Video links: With Cache
Feature request:
Currently, Dragonfly writes data directly to disk when processing preheat tasks. To improve performance and reduce latency, I propose introducing a caching mechanism that writes data to both memory and disk: subsequent reads can be served quickly from memory, while the copy on disk ensures persistence. The write and read paths of the cache are described in the Design section below.
This approach aims to reduce disk I/O, improve overall system efficiency, and significantly lower the time spent retrieving data from remote peers during preheat tasks.
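As a rough illustration of the intended behavior (a hypothetical sketch, not Dragonfly's actual implementation; the names PieceCache, capacity, and the on-disk layout are assumptions), the Go snippet below persists every piece to disk and additionally keeps a bounded copy in memory so that subsequent reads can skip disk I/O:

```go
package cache

import (
	"os"
	"path/filepath"
	"sync"
)

// PieceCache is a hypothetical sketch of the proposed behavior: every piece
// is persisted to disk, and a bounded number of pieces is also kept in
// memory so that subsequent reads avoid disk I/O.
type PieceCache struct {
	mu       sync.RWMutex
	mem      map[string][]byte // pieceID -> piece content kept in memory
	capacity int               // max number of pieces held in memory
	dir      string            // directory used for on-disk persistence
}

func NewPieceCache(dir string, capacity int) *PieceCache {
	return &PieceCache{
		mem:      make(map[string][]byte, capacity),
		capacity: capacity,
		dir:      dir,
	}
}

// Write persists the piece to disk and, if there is room, keeps a copy in memory.
func (c *PieceCache) Write(pieceID string, data []byte) error {
	// The disk write always happens so the data survives restarts.
	if err := os.WriteFile(filepath.Join(c.dir, pieceID), data, 0o644); err != nil {
		return err
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(c.mem) < c.capacity {
		buf := make([]byte, len(data))
		copy(buf, data)
		c.mem[pieceID] = buf
	}
	return nil
}

// Read returns the piece from memory when present, falling back to disk otherwise.
func (c *PieceCache) Read(pieceID string) ([]byte, error) {
	c.mu.RLock()
	data, ok := c.mem[pieceID]
	c.mu.RUnlock()
	if ok {
		return data, nil // fast path: no disk I/O
	}
	return os.ReadFile(filepath.Join(c.dir, pieceID))
}
```

A real implementation would also need an eviction policy (for example LRU) and byte-based size accounting for the in-memory copies; those details are omitted from this sketch.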
Use case:
UI Example:
Scope:
Design
Write to Cache
Read from Cache
Definition
Configuration
API Definition