
Manually specialize parts of CRC32 implementation to speed them up in debug mode #19

Open
wants to merge 1 commit into master

Conversation

stackotter
Contributor

These changes are related to my debug mode optimisation PR for swift-png, and they have to be merged before that PR can be merged.

These optimisations corresponded to about a 20% improvement in debug mode PNG decoding at the time that I implemented them.

Please tag a new version (probably just a patch version?) once this is merged so that I can update my swift-png PR to point to it.


Owner

out of curiosity, can we get away with a [UInt8] typed overload that calls into the generic Sequence<UInt8> overload?

this ought to be enough to trigger specialization if i remember correctly
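A minimal sketch of that idea, with hypothetical names (the real package's table and checksum logic are elided). One wrinkle: the concrete overload has to reach the generic one through a trampoline, because calling `update(with:)` directly on an `[UInt8]` argument would re-select the concrete overload and recurse:

```swift
struct CRC32
{
    var checksum:UInt32 = 0

    // generic implementation (placeholder body, not the real CRC logic)
    mutating
    func update(with message:some Sequence<UInt8>)
    {
        for byte:UInt8 in message
        {
            self.checksum = self.checksum &+ UInt32.init(byte)
        }
    }

    // concrete overload: gives callers a [UInt8]-typed entry point that the
    // optimizer can specialize the generic implementation through
    mutating
    func update(with message:[UInt8])
    {
        self.forward(message)
    }

    // generic trampoline: here `message` has opaque type, so the call below
    // resolves to the generic overload instead of recursing into [UInt8]
    private mutating
    func forward(_ message:some Sequence<UInt8>)
    {
        self.update(with: message)
    }
}
```

Whether this helps without `-O` is the point the reply below pushes back on.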

Contributor Author

This is specifically a debug mode optimisation, and I don't think the compiler does any specialisation in debug mode. Optimising for debug mode and optimising for release mode seem to be completely different beasts. As soon as a generic function is called, even one from the standard library, there seems to be a bunch of overhead. Here of course the overhead isn't at the call site of that function; it accumulates on every step of reduce because of the generic iterator.
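To illustrate the approach being proposed, here is a self-contained sketch of a manually specialized `[UInt8]` update (names are illustrative, not the package's actual API): a plain loop over the array avoids the generic `Sequence` iterator that `reduce` would go through on every byte, which is what costs so much without `-O`. The closure body mirrors the table lookup in the diff below:

```swift
struct CRC32
{
    // standard CRC-32 table (reflected polynomial 0xEDB88320)
    static
    let table:[UInt32] = (0 ..< 256).map
    {
        (i:Int) in
        (0 ..< 8).reduce(UInt32.init(i))
        {
            (c:UInt32, _) in c & 1 == 1 ? 0xedb8_8320 ^ c >> 1 : c >> 1
        }
    }

    var checksum:UInt32 = 0

    // manually specialized: a concrete loop over [UInt8], no generic iterator
    mutating
    func update(with message:[UInt8])
    {
        var state:UInt32 = ~self.checksum
        for byte:UInt8 in message
        {
            let index:Int = .init(UInt8.init(truncatingIfNeeded: state) ^ byte)
            state = Self.table[index] ^ state >> 8
        }
        self.checksum = ~state
    }
}

var crc:CRC32 = .init()
crc.update(with: [UInt8]("123456789".utf8))
print(String.init(crc.checksum, radix: 16)) // cbf43926, the standard CRC-32 check value
```

In release builds the optimizer specializes the generic version anyway, so the two should compile down to the same thing; the win is only in unoptimized builds.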

Owner

okay

///
/// This manually specialized implementation is much faster in debug mode than the
/// generic implementation, but exactly the same in release mode.
@inlinable public
Owner

should this be consuming, and should there also be a generic Sequence<UInt8> overload?

Contributor Author

There's already a generic overload above (which this manually specialized method is more or less a copy of). I don't think consuming would make a difference to debug mode performance, but I do agree that it'd make sense for both the existing method and this method to have it.

Owner

this type is bitwise copyable, so consuming would not affect performance, it is more a matter of “usage hint”. it’s also a bit simpler to spell the body of the function, as you simply mutate self and return it

please do add a consuming func updated(with message:borrowing some Sequence<UInt8>), somebody (probably myself) will inevitably reach for it

also, since these are performance hooks, i would really prefer if the [UInt8] overloads were prefixed with an underscore. overloading sucks in Swift, and it would be nice if documentation authors (both in this package and downstream) could still refer to update(with:) without having to guess an FNV-1 hash :)
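A sketch of the requested method, assuming Swift 5.9+ for the `consuming`/`borrowing` modifiers (the `update(with:)` body here is a placeholder standing in for the real table-driven one):

```swift
struct CRC32
{
    var checksum:UInt32 = 0

    mutating
    func update(with message:borrowing some Sequence<UInt8>)
    {
        // placeholder body, not the real CRC table lookup
        for byte:UInt8 in message
        {
            self.checksum = self.checksum &+ UInt32.init(byte)
        }
    }

    /// Consuming variant: the type is bitwise copyable, so `consuming` is a
    /// usage hint rather than an optimization. Inside a consuming method the
    /// owned `self` is mutable, so the body just mutates it and returns it.
    consuming
    func updated(with message:borrowing some Sequence<UInt8>) -> Self
    {
        self.update(with: message)
        return self
    }
}
```

Example usage: `let crc:CRC32 = CRC32.init().updated(with: [1, 2, 3])`.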

/// Updates the checksum by hashing the provided message into the existing checksum.
@inlinable public mutating
func update(with message:borrowing some Sequence<UInt8>)
{
self.checksum = ~message.reduce(~self.checksum)
{
(state:UInt32, byte:UInt8) in
Self.table[Int.init(UInt8.init(truncatingIfNeeded: state) ^ byte)] ^ state >> 8
let indexByte:UInt8 = UInt8.init(truncatingIfNeeded: state) ^ byte
Owner

since the performance-sensitive caller is going through the [UInt8] overload, do we actually need this?
