Feature request:
Currently, when downloading recursively, dfget needs to list all of the files under the target URL, and the listing result is kept only in memory.
If many dfget instances list the files under the same target URL, this puts much more pressure on the source.
We need to design a mechanism to cache the listing result in the P2P network.
Use case:
dfget --recursive d7yfs://domain/path/to/dir
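The exact cache design is still open; below is a minimal sketch of the intended behavior, in Rust to match the dfget client. Everything in it is illustrative: `ListingCache`, `DirEntry`, the 300-second TTL, and the `list_source` closure are assumptions rather than existing dfget APIs, and the in-process `HashMap` stands in for the real P2P-backed store, which would replicate the serialized listing between peers keyed by the target URL.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// One entry in a cached directory listing.
#[derive(Clone, Debug)]
struct DirEntry {
    path: String,
    size: u64,
    is_dir: bool,
}

/// A cached listing for one target URL, with a fetch timestamp so
/// stale listings are eventually re-fetched from the source.
struct CachedListing {
    entries: Vec<DirEntry>,
    fetched_at: Instant,
}

/// Hypothetical stand-in for the P2P-backed cache: the real mechanism
/// would publish the serialized listing into the P2P network, keyed by
/// the target URL, so peers pull it from each other instead of the source.
struct ListingCache {
    ttl: Duration,
    store: HashMap<String, CachedListing>,
}

impl ListingCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, store: HashMap::new() }
    }

    /// Return the cached listing for `url`, or list the source via
    /// `list_source` and cache the result for later callers.
    fn get_or_list<F>(&mut self, url: &str, list_source: F) -> &[DirEntry]
    where
        F: FnOnce() -> Vec<DirEntry>,
    {
        let expired = self
            .store
            .get(url)
            .map_or(true, |c| c.fetched_at.elapsed() > self.ttl);
        if expired {
            // Only this caller hits the source; everyone else reuses the
            // cached (or peer-replicated) listing until the TTL lapses.
            let entries = list_source();
            self.store.insert(
                url.to_string(),
                CachedListing { entries, fetched_at: Instant::now() },
            );
        }
        &self.store[url].entries
    }
}

fn main() {
    let mut cache = ListingCache::new(Duration::from_secs(300));
    let url = "d7yfs://domain/path/to/dir";

    // First call lists the source; the closure models the real listing request.
    let listing = cache.get_or_list(url, || {
        vec![DirEntry { path: "path/to/dir/a.txt".into(), size: 42, is_dir: false }]
    });
    println!("{} entries for {url}", listing.len());

    // Second call is served from the cache; the closure is never invoked.
    let listing = cache.get_or_list(url, || unreachable!("served from cache"));
    println!("{} entries for {url}", listing.len());
}
```

A TTL is only one possible invalidation policy; the real design would also need to decide how a refreshed listing propagates through the P2P network and whether a slightly stale listing is acceptable for recursive downloads.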