Ikey Doherty
I still suspect we want a better unpack routine
that would potentially speed up unpackContent
by using predetermined zstd buffer sizes
and avoid any GC
basically read from mmap, decompress into statically allocated buffer, dump direct to file
rather than the data-payload supporting use case we have right now, which is very chunk-oriented
and readahead-esque
at least it works
we can optimise that at a later date, imo
(low hanging fruit for anyone to take though)
like make ReaderToken have an API to unpack() directly, and make reader/package.d use that

ermo

Given what you just went through, I think calling it "low hanging fruit" might ... not be what it is?

Ikey Doherty

I've done the heavy lifting
it's basically gonna be more like a copy of readChunk where it loops on the input file
(well, input blocks)
and directly dumps without allocating to cachedStorage
keep optimal block sizes and avoid allocations
win-win
(given that the overhead of zstd is something like 131 MB per thread now)
ermo changed the title from "Optimised readers and writers" to "Optimise readers and writers" on Feb 17, 2023
ermo changed the title from "Optimise readers and writers" to "Optimise unpack routine" on Feb 17, 2023