Integer convolutions have rounding errors #410
Comments
Currently DSP.jl only has FFT convolution algorithms (overlap-save and naive FFT) using FFTW. The method you note above exists because FFTW only works with floats, so conversion and rounding are necessary for the available FFT algorithms. I have a PR with a number of fast direct convolution kernels, but it has stalled due to my temporary lack of bandwidth and uncertainty about how to select which algorithm to use when there are a decent number to pick from (as is the case in my PR).
Thanks, I understand. Maybe conv should just always return floats for now then, at least until an integer convolution method is added. To me that makes more sense than rounding the data: the error will still be there, but the reason for it will be obvious. It also avoids the cast to Int64.
Sorry, I don't think I read your proposal carefully enough the first time. Your points 2 and 3 are clearly the best way forward, and they are indeed what I have attempted to do. If I understand your first suggestion, it is to not support integer convolution at all, and instead ask users to write their own methods, because rounding is involved. I'm not sure that stripping away that functionality for the sake of absolute correctness is worth it, but it is an interesting suggestion; I would be curious to hear whether others would prefer no answer to an approximate answer. You're right, though, that this rounding behavior could be documented more clearly, since it is likely surprising.
I like your suggestion of just returning the floating-point result.
DSP.jl contains the following definition:
The float conversion and subsequent rounding introduce errors whenever the floating-point epsilon near the computed values is greater than 1, even though the method returns an `Int64` array. For example, the following result is unexpected to me:
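The original Julia example is not reproduced above. As a language-agnostic sketch of the failure mode (pure Python, hypothetical values; this models only the precision loss from the `Float64` conversion, on top of which the FFT adds further error), note that a 64-bit float has a 53-bit significand, so not every integer above 2^53 is representable:

```python
# Float64 (Python's float) has a 53-bit significand, so 2**53 + 1 is the
# smallest positive integer it cannot represent exactly.
x = 2**53 + 1
assert float(x) != x  # the conversion alone already loses the low-order bit

# Mimic the round trip the integer conv method performs: int -> float -> round -> int.
def round_trip(n):
    return round(float(n))

print(round_trip(2**53 + 1))  # 9007199254740992, not 9007199254740993
```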
With a "naive" direct convolution, there is no error:
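The direct-convolution snippet is likewise omitted above; a minimal pure-Python equivalent (hypothetical helper name `conv_direct`) shows why the direct approach stays exact for integers:

```python
def conv_direct(u, v):
    # Textbook O(n*m) direct convolution. With Python's arbitrary-precision
    # integers, every partial product and running sum is exact -- there is
    # no float conversion and hence no rounding step.
    out = [0] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            out[i + j] += a * b
    return out

print(conv_direct([2**53 + 1, 1], [1, 1]))  # low-order bits preserved exactly
```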
The Julia FAQ correctly (imo) identifies many reasons why integer overflow is an acceptable error where rounding/truncation is not; wraparound may even be desirable. For Julia, I also think the current method is poor in terms of type stability, considering the following:
since it always casts to `Int`, regardless of the input integer element type. FWIW, scipy uses machine integer arithmetic:
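The scipy snippet is omitted above. With NumPy-backed 64-bit integer dtypes, products that leave the representable range wrap around in two's complement rather than being rounded; a pure-Python sketch of that wraparound semantics (hypothetical helper `wrap_int64`):

```python
def wrap_int64(x):
    # Reduce an arbitrary-precision integer to a signed two's-complement
    # 64-bit value, matching machine Int64 overflow semantics.
    x &= (1 << 64) - 1
    return x - (1 << 64) if x >= (1 << 63) else x

# Overflow wraps deterministically instead of silently rounding:
print(wrap_int64(2**63))      # -9223372036854775808 (wrapped)
print(wrap_int64(2**53 + 1))  # 9007199254740993 (in range: exact)
```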
There are a couple ways to tackle this: