src: avoid allocation in the Size method for BASE64URL and BASE64 #53550
Conversation
For people who care about history, Size has been expensive since the very beginning (we trace it back to the initial commit by @isaacs in 2013):

```cpp
case BASE64: {
  String::AsciiValue value(str);
  data_size = base64_decoded_size(*value, value.length());
  break;
}
```
lgtm
For the record, Size() wasn't just doing an unnecessary allocation, it was also returning the wrong size, causing us to allocate a much larger buffer than necessary.

```cpp
str->Length() % 4 <= 1 ? str->Length() / 4 * 3
                       : str->Length() / 4 * 3 + (str->Length() % 4) - 1
```

which is only an upper bound, as it doesn't look at the actual data, and so it assumes the worst case (all characters are data characters, i.e. no padding or whitespace). The math looks correct to me. Bear in mind that while Size() is allowed to return an upper bound, it is expected to return an exact prediction most of the time. If this is not the case, we do another allocation + memcpy to a new backing store with the actual size. For base64url the prediction is usually correct as it rarely has padding, but for base64 it will miss 2/3 of the time (or always, if the input contains whitespace).
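To illustrate the upper-bound-vs-exact distinction discussed above, here is a small JavaScript sketch (the function names are mine, not Node's; the estimate mirrors the C++ expression, the exact version does the full scan that discards padding and whitespace):

```javascript
// Upper-bound estimate, mirroring the C++ expression above: it assumes every
// character is a data character, so padding or whitespace makes it overshoot.
function base64SizeUpperBound(len) {
  return len % 4 <= 1 ? Math.floor(len / 4) * 3
                      : Math.floor(len / 4) * 3 + (len % 4) - 1;
}

// Exact prediction requires scanning the whole input and discarding
// '=' padding and whitespace before applying the same formula.
function base64ExactSize(str) {
  let data = 0;
  for (const ch of str) {
    if (!/[=\s]/.test(ch)) data++;
  }
  return data % 4 <= 1 ? Math.floor(data / 4) * 3
                       : Math.floor(data / 4) * 3 + (data % 4) - 1;
}

// "aGVsbG8=" decodes to "hello" (5 bytes), but the upper bound says 6
// because of the '=' padding, which is exactly the case where the runtime
// would do the second allocation + memcpy.
console.log(base64SizeUpperBound("aGVsbG8=".length)); // 6
console.log(base64ExactSize("aGVsbG8="));             // 5
```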
As far as I can tell, Node never returned the exact size. Doing so requires scanning the entire input, checking for characters to discard.
PR is currently blocked from landing due to unreliable CI
@jasnell Indeed.
@lemire can you rebase and force-push if you don't mind?
@anonrig I synced.
@anonrig Looks like it is turning green... what did you do???? ❤️
@anonrig It still won't complete the tests though.
@anonrig Stuck.
It seems that all macOS machines are down/offline at the moment. nodejs/build#3887
@anonrig This will never go through, will it?
This needs a rebase.
This PR avoids calling simdutf on base64 encodings. It is not obsolete!
@anonrig It is possible I missed something, if so, let me know.
For large base64 strings, this PR multiplies the performance of `Buffer.from(..., "base64")` by making the Size function simpler and non-allocating. This will reduce the gap with Bun from over 3x slower to only about 40% slower for large inputs. For modest inputs, Node.js is still about 2x slower than Bun. Note that both Bun and Node.js use the same underlying library (simdutf) for base64 decoding, so any difference is entirely due to the runtime (not to the base64 decoding per se).
Benchmark (from bun):
Node.js 22:
Node.js with this PR:
Bun canary (upcoming release)
To get the Bun results, you need the canary, which you may get with `bun upgrade --canary`.
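The original benchmark is from Bun's repo and is not reproduced above; a minimal stand-in along the same lines (the 8 MiB input size is my arbitrary choice for "large base64 strings") could look like:

```javascript
// Minimal stand-in for the benchmark (the real one lives in Bun's repo).
// Build a large base64 string, then time Buffer.from(..., "base64"),
// which is the path whose Size() estimate this PR improves.
const input = Buffer.alloc(8 * 1024 * 1024, "abc").toString("base64");

const start = process.hrtime.bigint();
const decoded = Buffer.from(input, "base64");
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;

console.log(`decoded ${decoded.length} bytes in ${elapsedMs.toFixed(2)} ms`);
```

Running it under Node.js 22, a build with this PR, and Bun would give the three-way comparison shown above.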