Comparison of model-fitting errors with input values using simulated images #403
Sorry this was left unaddressed. We don't currently output the full covariance matrix; this is not easy, as the matrix is based on internal parameters. I'll classify this as a feature request.
Thanks!
Hello, I am pinging this issue because it still appears to be unresolved and remains very relevant. In my experience building SE++ catalogs on images from JWST, HST, UVISTA, and HSC, I persistently encounter uncertainties on the model-fitting parameters that are significantly underestimated. The following figure is an example of JWST/NIRCam fluxes measured with SE++ on simulated images, compared with the true input fluxes. The yellow envelope shows the 1-sigma scatter of the measured−true values, while the red envelope shows the median of the reported SE++ uncertainties as a function of magnitude. If the SE++ uncertainties were accurate, the two would largely agree; instead, the large gap shows that the SE++ uncertainties are substantially underestimated. These uncertainties are a key ingredient in SED-fitting codes and in measuring the S/N of sources in different bands, which is critical for dropout selection and similar analyses. Has this issue been addressed somehow? Do you have any guidelines for working around it? A clearer description of how the uncertainties are derived would also be useful for discussing results and devising ad-hoc corrections. Thank you for your consideration.
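For reference, the comparison behind the figure (binned 1-sigma scatter of measured−true fluxes versus the median reported uncertainty) can be sketched as follows. This is a minimal sketch, not SE++ code; the array names (`mag`, `flux_meas`, `flux_true`, `flux_err`) are hypothetical catalog columns.

```python
import numpy as np

def scatter_vs_reported(mag, flux_meas, flux_true, flux_err, n_bins=10):
    """Per magnitude bin, compare the empirical 1-sigma scatter of
    (measured - true) fluxes with the median reported uncertainty.
    If the reported uncertainties are accurate, the two curves agree."""
    edges = np.linspace(mag.min(), mag.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    scatter = np.full(n_bins, np.nan)
    median_err = np.full(n_bins, np.nan)
    resid = flux_meas - flux_true
    for i in range(n_bins):
        sel = (mag >= edges[i]) & (mag < edges[i + 1])
        if sel.sum() > 2:
            # Robust 1-sigma width from the 16th-84th percentile range
            lo, hi = np.percentile(resid[sel], [15.87, 84.13])
            scatter[i] = 0.5 * (hi - lo)
            median_err[i] = np.median(flux_err[sel])
    return centers, scatter, median_err
```

Bins where `scatter` is well above `median_err` are exactly the regime shown by the yellow versus red envelopes in the figure.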
Hi Marko,
Do the reported uncertainties in aperture photometry better match the observed scatter?
Hello,
I have worked a bit on simulated images with sourcextractor++, and while the model fitting usually retrieves values close to the parameters used to build the images, the differences between data and model tend to be significantly larger than the sourcextractor++ error bars.
I used 100 simulated images of 512 px × 512 px with a 0.396 arcsec/pixel scale, a 216 s exposure time, and a gain of 5352, in one band (SDSS g filter), with the following apparent-magnitude distribution:
The images were created using Stuff and SkyMaker, with a Sersic profile for the bulge and an exponential profile for the disk (both models are concentric), and with the following parameters:
I used the following configuration for the model-fitting:
Bulge:
Disk:
Below you can see the discrepancies between the "actual error" (x axis) and the error reported by sourcextractor++ (y axis) for some parameters. My concern is that most galaxies have error_param_srx < |param_srx − param_input|, with up to two orders of magnitude between the two (except for the positional parameters). So I wonder whether the errors are also underestimated when I use observed images.
Note that two kinds of images were generated (in pixel or world coordinates); the resulting points are shown in blue/red, but this makes no difference to the statistics.
Is there a way to access the covariance error terms (i.e. the off-diagonal elements of the covariance matrix)? For the analysis I am doing, it would be useful to see, for example, the covariance between the Sersic index and the scale radius (of a Sersic profile), which I expect to be significant.
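While the off-diagonal covariance terms are not exposed, one possible workaround on simulated images is to estimate the parameter covariances empirically, from the fit errors across many realizations. This is only a sketch under that assumption; the input arrays (fitted and true Sersic indices and scale radii) are hypothetical catalog columns, not an SE++ output.

```python
import numpy as np

def empirical_covariance(sersic_fit, radius_fit, sersic_true, radius_true):
    """Estimate the covariance between Sersic-index and scale-radius errors
    from repeated fits on simulated images, as a stand-in for the
    unavailable off-diagonal covariance terms."""
    errors = np.vstack([np.asarray(sersic_fit) - sersic_true,
                        np.asarray(radius_fit) - radius_true])
    cov = np.cov(errors)  # 2x2 empirical covariance matrix of the errors
    corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
    return cov, corr
```

A correlation coefficient close to ±1 would confirm the expected strong degeneracy between the Sersic index and the scale radius.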
Thank you for your feedback.
Best regards,
Louis