Support combined cluster count and cluster weak lensing #2

Open
NiallMac opened this issue Feb 9, 2018 · 6 comments
NiallMac commented Feb 9, 2018

We want to extend the library/file format so that it can describe a combined cluster count and cluster weak lensing analysis. The cluster weak lensing signal is a 2-point function, so the SpectrumMeasurement class should work, possibly with some small extensions (e.g. if we are using DeltaSigma(R) rather than gamma_t(theta)). There are a few extra considerations:

i) Boost factors - one value per tangential shear data point. The boost factors have their own covariance matrix. Is there covariance between the boost factors and the raw measurements?

ii) Clusters are split into richness bins as well as redshift bins. This does not necessarily require generalizing the SpectrumMeasurement class. We could either

a. Have a separate SpectrumMeasurement instance (i.e. separate extensions at the file level) per richness bin (each containing all the z bin combinations for that richness bin)...

b. Hold all (richness, redshift) bins in one SpectrumMeasurement instance (i.e. the same extension at the file level), e.g. ordered as (R_i = richness bin i, zc_j = cluster redshift bin j, zs_k = source redshift bin k):
R_1, zc_1, zs_1
R_1, zc_1, zs_2
...
R_1, zc_2, zs_1
R_1, zc_2, zs_2
...
R_2, zc_1, zs_1
R_2, zc_1, zs_2
etc.
The 'bin1' index would then be R_i * (# of cluster z bins) + zc_j (and the bin2 index would just be the source redshift bin index k, as usual).
For this solution, we would probably still want to generalize SpectrumMeasurement (i.e. make a child class) so that it can translate between bin1 and (R_i, zc_j).
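For illustration, the flat bin1 convention from option (b) could be sketched as a pair of helpers. These function names and the 0-based indexing are assumptions for the sketch, not part of the existing SpectrumMeasurement API:

```python
# Sketch of option (b)'s flat-index convention: pack the
# (richness bin, cluster z bin) pair into a single 'bin1' integer.
# Indices here are 0-based; the library may well use 1-based bins.

def pack_bin1(richness_bin, cluster_z_bin, n_cluster_z_bins):
    """Map (R_i, zc_j) -> bin1 = R_i * (# of cluster z bins) + zc_j."""
    return richness_bin * n_cluster_z_bins + cluster_z_bin

def unpack_bin1(bin1, n_cluster_z_bins):
    """Invert pack_bin1: bin1 -> (R_i, zc_j)."""
    return divmod(bin1, n_cluster_z_bins)

# With 2 cluster z bins, the second richness bin and second cluster
# z bin (both index 1) land at bin1 = 1 * 2 + 1 = 3.
assert pack_bin1(1, 1, n_cluster_z_bins=2) == 3
assert unpack_bin1(3, n_cluster_z_bins=2) == (1, 1)
```

A child class of SpectrumMeasurement could wrap exactly this translation so users never handle the packed index directly.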

iii) We want to include count information. This is just a number (per unit area?) per (richness, cluster z) bin. It also needs an accompanying row/column in the covariance matrix.
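As a rough sketch of how the count part of the data vector might be laid out (the array names and ordering are assumptions, chosen to mirror the (richness, cluster z) ordering discussed above):

```python
import numpy as np

# Hypothetical layout: one count (or count per unit area) per
# (richness bin, cluster z bin), flattened in the same order as the
# lensing bins so that its rows/columns can be appended to the
# covariance matrix consistently.
n_richness, n_cluster_z = 3, 2
counts = np.zeros((n_richness, n_cluster_z))
counts[0, 0] = 1520.0  # illustrative value for the (R_1, zc_1) bin

# Row-major flattening reproduces the bin1 = R_i * n_cluster_z + zc_j
# ordering, so position i * n_cluster_z + j holds counts[i, j].
counts_vector = counts.reshape(-1)
assert counts_vector.shape == (n_richness * n_cluster_z,)
assert counts_vector[0 * n_cluster_z + 0] == counts[0, 0]
```

The key design point is just that the flattening convention for the counts should match whatever convention is adopted for bin1 in the 2-point block.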

danielgruen commented Feb 9, 2018 via email

@NiallMac

@danielgruen Do we need to save n(lambda) (as in the full distribution, not just the total) for each redshift/richness bin? I would have thought this would be required to predict P(M) for that bin, e.g. P(M) = \int dlambda P(M | lambda) n(lambda), or something...


NiallMac commented Feb 10, 2018

Apparently the full n(lambda) distribution isn't necessary - the next question is how to include the redshift selection function information. The redshift selection function for a given cluster bin is just a top-hat in cluster photo-z. However, information from the catalogs (and I think this file should contain all the information required from the catalogs) is needed to generate P(z_true | photo-z) for the likelihood calculation. How should we store this information? A couple of options:

i) For each cluster bin, store arrays of finely spaced z, mean(sigma_z) and std(sigma_z) (where sigma_z is the RedMapper-reported photo-z error, and the mean/std is taken over all clusters in some finely spaced z bin).

ii) Matteo uses a polynomial fit to sigma_z(z) for each lambda bin - we could store the polynomial coefficients.

Both of these options could be accompanied by a function that returns sigma_z(z, cluster bin). One advantage of (ii) is that it will be faster for repeated evaluations, e.g. in integrals. But of course one could also fit a polynomial to the arrays stored in option (i).
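To make the comparison concrete, here is a minimal sketch (using numpy, with made-up illustrative numbers rather than real RedMapper values) of converting option (i)'s stored arrays into option (ii)'s polynomial form:

```python
import numpy as np

# Option (i): a finely spaced z grid with the mean photo-z error per
# grid point for one cluster bin (illustrative stand-in values).
z_grid = np.linspace(0.1, 0.7, 50)
mean_sigma_z = 0.01 + 0.02 * z_grid  # stand-in for the catalog averages

# Option (ii): fit a low-order polynomial and store its coefficients
# instead of (or alongside) the arrays.
coeffs = np.polyfit(z_grid, mean_sigma_z, deg=2)

def get_sigma_z(z):
    """Evaluate sigma_z(z) from the stored polynomial coefficients."""
    return np.polyval(coeffs, z)

# Repeated evaluation (e.g. inside an integral over z) then reduces
# to a cheap polynomial evaluation.
assert np.allclose(get_sigma_z(z_grid), mean_sigma_z, atol=1e-6)
```

This also shows why the two options are not mutually exclusive: the arrays of option (i) are sufficient to regenerate the coefficients of option (ii) at load time.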

Thoughts?


danielgruen commented Feb 15, 2018 via email


NiallMac commented Feb 15, 2018

Yeh, that is what I went with in the end. There's a get_sigma_z function that at least removes ambiguity for users of an already-made file. But yeh, the ordering on input is fairly ambiguous - I guess this just needs to be documented very clearly...


danielgruen commented Feb 15, 2018 via email
