Basically I'd like to calculate a convolution (which could use a Gaussian kernel or a tophat one, the latter being equivalent to a rolling mean) over some data that I have. However, instead of specifying the number of points used for the convolution (or the window size), I'd like to specify the kernel width in the units of a coordinate. Consider data on a non-uniform, log-spaced grid:

In [11]: import numpy as np
In [12]: import xarray as xr
In [13]: da = xr.DataArray(np.random.randn(30), dims=["x"], coords=dict(x=np.logspace(0, 2, 30)))
In [14]: da
Out[14]:
<xarray.DataArray (x: 30)>
array([-0.44477985, -1.31223501,  0.06750107, -0.87812805, -1.69879256,
        0.27442237, -1.10400604,  0.2028317 , -1.22788433, -0.50962412,
       -0.98329938,  1.05594705, -1.5718489 , -0.04504426, -1.80016001,
       -0.21303616, -0.60341888, -0.4295976 ,  0.21796551, -0.02088305,
       -0.08396128,  0.52999041, -1.12443824,  0.12394816,  2.90654089,
        1.02446384,  0.33422938, -0.4752087 , -0.45542943,  1.41001154])
Coordinates:
  * x        (x) float64 1.0 1.172 1.374 1.61 1.887 ... 62.1 72.79 85.32 100.0
In [15]: da.x.diff("x")
Out[15]:
<xarray.DataArray 'x' (x: 29)>
array([ 0.1721023 ,  0.2017215 ,  0.23643823,  0.27712979,  0.32482447,
        0.38072751,  0.44625158,  0.52305251,  0.61307105,  0.71858198,
        0.84225159,  0.98720503,  1.15710528,  1.35624576,  1.58965877,
        1.86324269,  2.18391104,  2.55976715,  3.00030896,  3.51666902,
        4.12189584,  4.83128358,  5.66275859,  6.63733235,  7.7796325 ,
        9.11852513, 10.68784425, 12.5272468 , 14.68321476])
Coordinates:
  * x        (x) float64 1.172 1.374 1.61 1.887 2.212 ... 62.1 72.79 85.32 100.0

If we interpret x as a distance coordinate in meters, it would make little physical sense to specify a fixed number of points. Instead, it would be much more helpful if I could express my kernel width directly in meters, so that the set of points entering the convolution would be different at each location. For example, a 10 m wide kernel on the data above would use many points around x = 10, but only one or two around x = 100, where the grid spacing exceeds 10 m. Is it possible to do this efficiently with the current tools we have? At the moment, my workaround for these cases has been to interpolate the data to a hi-res uniform grid before calling rolling().
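A minimal sketch of that workaround, assuming a tophat (rolling-mean) kernel and an illustrative target spacing dx (the interpolation step requires scipy):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.randn(30), dims=["x"],
                  coords=dict(x=np.logspace(0, 2, 30)))

# Resample onto a fine uniform grid; dx = 0.1 m is an illustrative choice.
dx = 0.1
da_hires = da.interp(x=np.arange(1.0, 100.0 + dx, dx))

# On the uniform grid, a 10 m tophat kernel is a fixed number of points.
window = int(round(10.0 / dx))
smoothed = da_hires.rolling(x=window, center=True, min_periods=1).mean()
```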
xref #3216. I am not aware of any helper libraries here, but my approach would be to construct a new sparse array of weights and apply it with a dot product. This array of weights will be banded, so there's a potential optimization there. Please do share any code you come up with.
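A minimal sketch of that suggestion, assuming a tophat kernel of fixed width in coordinate units; the weight matrix is built dense here for clarity, leaving the banded/sparse optimization aside (width and the x_out dimension name are illustrative):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.randn(30), dims=["x"],
                  coords=dict(x=np.logspace(0, 2, 30)))

# (x_out, x) matrix of tophat weights: each output point averages all
# input points within width/2 of it, measured in coordinate (meter) units.
width = 10.0
x = da.x.values
w = (np.abs(x[:, None] - x[None, :]) <= width / 2).astype(float)
w /= w.sum(axis=1, keepdims=True)  # each row of weights sums to 1

weights = xr.DataArray(w, dims=["x_out", "x"],
                       coords={"x_out": x, "x": x})

# Contract over the input dimension; storing `weights` as a banded/sparse
# matrix would make this step much cheaper for large arrays.
smoothed = xr.dot(weights, da, dims="x").rename(x_out="x")
```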
Thanks for the clarifications! The code is now working properly, and I included a couple of bells and whistles. I haven't pursued any optimization yet, but I know what to try now based on opt_einsum (although I'm not sure I can apply that effectively, since the proper way to do things is to use integrate()). Here's a snippet with an example:
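A minimal sketch in that spirit, assuming a Gaussian kernel whose width sigma is given in coordinate units, with the convolution and its normalization both computed via integrate() (the helper gaussian_smooth and its arguments are illustrative names):

```python
import numpy as np
import xarray as xr

def gaussian_smooth(da, dim, sigma):
    """Gaussian smoothing with `sigma` in the units of the `dim`
    coordinate, valid on a non-uniform grid."""
    x = da[dim]
    x_out = x.rename({dim: dim + "_out"})
    # Kernel evaluated at every (output, input) pair of coordinate values.
    kernel = np.exp(-0.5 * ((x_out - x) / sigma) ** 2)
    # integrate() applies the trapezoidal rule along the coordinate, so
    # both the convolution and its normalization account for the spacing.
    return (kernel * da).integrate(dim) / kernel.integrate(dim)

da = xr.DataArray(np.random.randn(30), dims=["x"],
                  coords=dict(x=np.logspace(0, 2, 30)))
smoothed = gaussian_smooth(da, "x", sigma=5.0).rename(x_out="x")
```

Dividing by kernel.integrate(dim) normalizes the kernel pointwise, which also handles the domain edges, where part of the kernel falls outside the data.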