Add probabilistic versions of some regularizers #30
Did some overdue infrastructure rework in StationaryRandomFields.jl to make it happen.
We may want to consolidate VLBIImagePriors and VLBISkyRegularizers then. A lot of this overlaps with what is already implemented in VLBIImagePriors. For instance, we already have L2 and TSV implemented there for a variety of base distributions, including something similar to an exponential and a t-distribution. Additionally, you do need to be careful about what you call probabilistic. For example, TSV is not a proper prior since it is invariant under a total image scaling. I am a little concerned about faking the rand call. For the wavelet it actually sounds really easy to give it a probabilistic interpretation. We don't even need to make a prior in that case; we can just define a function that does the wavelet transform and call that in the sky model function. For instance:

```julia
function sky(params, meta)
    (; img) = params
    img_trans = wavelet_transform(img)
    m = ContinuousImage(img_trans, DeltaPulse())
    return m
end

prior = (
    img = MvLaplace(nx, ny), # We could implement this?
)
```
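A minimal sketch of what such a `wavelet_transform` helper could look like, using Wavelets.jl with a Haar basis; the helper name, the basis choice, and the use of `idwt` here are illustrative assumptions, not an existing API in VLBISkyRegularizers or VLBIImagePriors:

```julia
using Wavelets

# Hypothetical helper: treat the image parameters as wavelet coefficients and
# map them back to image space with an inverse discrete wavelet transform.
# A Laplace/MvLaplace prior on `coeffs` then plays the role of a wavelet-L1 term.
function wavelet_transform(coeffs::AbstractMatrix; wt = wavelet(WT.haar))
    # idwt in Wavelets.jl requires dyadic (power-of-two) image dimensions
    return idwt(coeffs, wt)
end
```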
Yeah, I would note that we don't have TSV, but L1 and L2 are being implemented in StationaryRandomFields.jl as well. My preference is that, for regularizers already covered by VLBIImagePriors or StationaryRandomFields.jl, we internally call those implementations while this package still provides an RML-like front end. Perhaps the documentation should point out that a simpler route already exists for some priors with the existing Comrade.jl ecosystem.
That is a great point. If we follow the terminology of Distributions.jl, how about calling TSV-like priors "Sampleable" rather than "Probabilistic"?
I don't have an immediate resolution, as I believe this is required for the Comrade ecosystem. Maybe we should show a warning message for this kind of rand-faked regularizer?
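For example, a warning could be attached to the faked `rand` along these lines; `FakeRandRegularizer` is a hypothetical placeholder type, not something in the package:

```julia
using Random

# Hypothetical regularizer whose rand call is faked (no proper prior behind it).
struct FakeRandRegularizer end

function Base.rand(rng::Random.AbstractRNG, ::FakeRandRegularizer)
    # Warn once per session so samplers that call rand() still run,
    # while the user is told the draw is not from a proper prior.
    @warn "rand() for this regularizer is faked and does not sample a proper prior." maxlog = 1
    return zeros(2, 2)  # placeholder "sample" for illustration only
end
```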
That's an interesting point for the wavelet L1 implementation. I think it is possible. It can be done with something like this using the development version of StationaryRandomFields.jl (not in the main branch now, as I wrote in the issue):

```julia
using Distributions
using StationaryRandomFields

img = UnivariateLaplaceRandomField((nx, ny))

# dist can be specified if you want to change the scale parameter
img = UnivariateLaplaceRandomField((nx, ny), dist = Laplace(0, 1/hyperparameter))
```
@ptiede @anilipour
I'm thinking about the possibility of adding more probabilistic versions of some regularizers. For instance, a quadratic regularizer like TSV can be written as a quadratic form, so it has a Gaussian counterpart, e.g.

(x1 - x2)^2 + (x2 - x3)^2 = (x1, x2, x3) C (x1, x2, x3)^T, where C = ((1, -2, 0), (0, 2, -2), (0, 0, 1)).
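As a quick sanity check of that identity (just a sketch; `C` is the upper-triangular matrix from the example above):

```julia
using LinearAlgebra

C = [1 -2  0;
     0  2 -2;
     0  0  1]

x = randn(3)
@assert (x[1] - x[2])^2 + (x[2] - x[3])^2 ≈ x' * C * x

# The symmetrized matrix is the precision matrix of the implied Gaussian.
# It is singular, consistent with the remark above that TSV alone
# is not a proper prior.
P = (C + C') / 2
@assert isapprox(det(P), 0; atol = 1e-12)
@assert isapprox(norm(P * ones(3)), 0; atol = 1e-12)
```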
It may be interesting to add fully probabilistic versions of these regularizers (not replacing the current ones at this moment). By doing that we would get extra benefits, for instance being able to draw genuine samples from the prior rather than faking the rand call.
On the other hand, isotropic TV and MEM do not have a good analytic probabilistic counterpart. Also, a sum of multiple regularizers will in general not have a probabilistic counterpart (L2 + TSV would be a special case). This can be handled by adding a trait; see an example in ComradeBase.jl. In a similar way to the ComradeBase modifiers, you can set up traits and define how each operator propagates them: for instance, any operator (`+`, `-`) between regularizers makes the resulting output `NotProbabilistic` (the only exception is L2 and TSV), while multiplication by a constant factor (`*`) just rescales the probabilistic function and therefore does not change the probabilistic nature. What do you think?
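To make the trait idea concrete, here is a minimal sketch of how it could look; every name (`ProbabilisticTrait`, `AbstractRegularizer`, `AddRegularizer`, `ScaledRegularizer`, `isprobabilistic`, ...) is a hypothetical placeholder following the rules described above, not an existing API:

```julia
abstract type ProbabilisticTrait end
struct IsProbabilistic  <: ProbabilisticTrait end
struct NotProbabilistic <: ProbabilisticTrait end

abstract type AbstractRegularizer end
struct L2Reg  <: AbstractRegularizer end
struct TSVReg <: AbstractRegularizer end

struct AddRegularizer{A<:AbstractRegularizer,B<:AbstractRegularizer} <: AbstractRegularizer
    a::A
    b::B
end

struct ScaledRegularizer{R<:AbstractRegularizer} <: AbstractRegularizer
    scale::Float64
    r::R
end

# Leaf regularizers declare their own trait; the default is NotProbabilistic.
isprobabilistic(::AbstractRegularizer) = NotProbabilistic()
isprobabilistic(::L2Reg)  = IsProbabilistic()
isprobabilistic(::TSVReg) = IsProbabilistic()

# Sums of regularizers are NotProbabilistic by default ...
isprobabilistic(::AddRegularizer) = NotProbabilistic()
# ... with L2 + TSV as the known special case (both are Gaussian quadratic forms).
isprobabilistic(::AddRegularizer{L2Reg,TSVReg}) = IsProbabilistic()
isprobabilistic(::AddRegularizer{TSVReg,L2Reg}) = IsProbabilistic()

# Multiplying by a constant only rescales the density, so the trait is preserved.
isprobabilistic(r::ScaledRegularizer) = isprobabilistic(r.r)

# Operator overloads just build the wrapper types:
Base.:+(a::AbstractRegularizer, b::AbstractRegularizer) = AddRegularizer(a, b)
Base.:*(c::Real, r::AbstractRegularizer) = ScaledRegularizer(float(c), r)

# Usage:
isprobabilistic(L2Reg() + TSVReg())          # IsProbabilistic()
isprobabilistic(0.5 * (L2Reg() + TSVReg()))  # still IsProbabilistic()
```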