Stateless hashing

We would like to extend SIS with a feature for hashing an array of field elements directly, as well as lower-level functions that enable caller-level optimizations. The motivation is that, when hashing repeatedly with the same hasher, a very large part of the computation is spent resetting the hasher. We would like to avoid this, given that it is not "useful" computation.
Example
```go
// `vec` being whatever array of field elements that is short enough to be processed by the key
var vec []fr.Element

// initialize some key
key := sis.NewSis(...)

// hash the slice directly. NB: no need to `Reset` the hasher. The function should *only* read
// the `degree`, the `logTwoBound` and the `Ag` of the instance. This does not mutate anything
// in the key.
digest1 := key.HashFieldSlice(vec)

// so if we just repeat it, we get exactly the same result, i.e. digest1 == digest2
digest2 := key.HashFieldSlice(vec)
```
Optimization of the code path
In order to optimize memory throughput and allocations, the best algorithm would be something like this:
```go
func (r *RSis) HashFieldSlice(vec []fr.Element) []field.Element {

	// Preliminary checks
	numLimbsPerFr := divceil(fr.Bits, r.LogTwoBound)
	degree := r.Degree
	if degree%numLimbsPerFr != 0 && numLimbsPerFr%degree != 0 {
		// With our parameters, we already enforce that everything is a power of two,
		// so this will not be an issue in practice.
		panic("we should have that either `numLimbsPerFr` divides `degree` or the converse")
	}

	// When we are here, there are two possible situations:
	//
	//   - Either the number of limbs per field element is larger than the degree:
	//     in that case, one field element is encoded by several polynomials.
	//   - Or the number of limbs per field element is smaller than the degree:
	//     in that case, each polynomial encodes more than one input field element.
	//
	// To harmonize everything, we can use a temporary *short* buffer which can
	// hold enough limbs for at least one polynomial and at least one input field
	// element.
	bufferSize := max(numLimbsPerFr, degree)
	numFrPerBuffer := bufferSize / numLimbsPerFr
	numPolPerBuffer := bufferSize / degree
	bufferLimbs := make([]field.Element, numLimbsPerFr*numFrPerBuffer)
	bufferNullity := make([]bool, numFrPerBuffer) // tracks if the inputs are zero

	// Make a shallow copy of the input vector that we are going to progressively drain
	toHash := vec
	var bufferFr []fr.Element

	// Preallocate the result
	result := make([]field.Element, degree)

	// NB: this loop potentially leaves some more inputs to be hashed
	for len(toHash) > numFrPerBuffer {
		// Drain toHash, accounting for the fact that the length of toHash might not
		// be divisible by `numFrPerBuffer`. If not, there will be an optional last
		// step after the loop where we hash the remaining inputs.
		bufferFr, toHash = toHash[:numFrPerBuffer], toHash[numFrPerBuffer:]

		// Decompose into the preallocated buffer
		limbDecomposeFr(bufferLimbs, bufferFr, r.LogTwoBound, degree, bufferNullity)

		for i := 0; i < numPolPerBuffer; i++ {
			// As before:
			// - FFT on coset
			// - Multiply by `Ag` and accumulate into `result`
		}
	}

	// Getting this out of the loop allows removing a `check` in the runtime
	if len(toHash) > 0 {
		// Or just zeroize the last values of the buffer, but that should not matter
		// in practice.
		bufferFr = make([]fr.Element, numFrPerBuffer)
		copy(bufferFr, toHash)
		// And same thing as in the main loop
	}

	// Do the FFT inverse over result
	r.Domain.FFTInverse(result, fft.DIT, fft.OnCoset(), fft.WithNbTasks(1)) // -> reduces mod Xᵈ+1
	// And return it directly
	return result
}
```
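The sketch relies on `divceil` and `limbDecomposeFr`, which are not shown. Below is a minimal, self-contained model of that arithmetic (hypothetical helper names; a 64-bit word stands in for `fr.Element`, so `fr.Bits` is taken as 64), including the buffer-size harmonization from the comments above: the buffer must hold a whole number of polynomials and a whole number of input field elements.

```go
package main

import "fmt"

func divceil(a, b int) int { return (a + b - 1) / b }

// limbDecompose splits a 64-bit stand-in for a field element into limbs of
// logTwoBound bits each, least-significant limb first.
func limbDecompose(x uint64, logTwoBound int) []uint64 {
	numLimbs := divceil(64, logTwoBound)
	mask := uint64(1)<<logTwoBound - 1
	limbs := make([]uint64, numLimbs)
	for i := range limbs {
		limbs[i] = x & mask
		x >>= logTwoBound
	}
	return limbs
}

func main() {
	const bits, logTwoBound, degree = 64, 8, 16

	numLimbsPerFr := divceil(bits, logTwoBound) // 8 limbs per field element
	bufferSize := max(numLimbsPerFr, degree)    // 16 limbs per buffer
	numFrPerBuffer := bufferSize / numLimbsPerFr
	numPolPerBuffer := bufferSize / degree
	fmt.Println(numLimbsPerFr, bufferSize, numFrPerBuffer, numPolPerBuffer) // 8 16 2 1

	limbs := limbDecompose(0x0102030405060708, logTwoBound)
	fmt.Println(limbs) // little-endian limbs: [8 7 6 5 4 3 2 1]
}
```

With `degree` a power of two and `logTwoBound` dividing the word size, the divisibility check in the sketch holds by construction, which is what the `panic` branch asserts.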