
How to Obtain Evaluation Metrics Per Forest Region #13

Open
yihshe opened this issue Jan 14, 2025 · 2 comments

yihshe commented Jan 14, 2025

Hello! I’m wondering how to evaluate the results for each forest region individually. Currently, evaluation_stats_FOR.py only returns a single set of evaluation metrics for all the forest plots. Is it possible to use this script to calculate metrics for all the plots within each forest region, as displayed in Table 10 of the paper, or has it been implemented somewhere? Thank you for your help!

Another related question: It seems the returned metrics are primarily oAcc, mAcc, IoU, and mIoU. How can I obtain additional metrics such as F-score, Cov, and the number of detected trees for each forest plot, as shown in Table 6?

@bxiang233 (Collaborator) commented

Hi, thank you for your interest! Since I've changed workplaces and worked on other projects in the meantime, I'll need some time to locate the previous code... However, the issues you mentioned are relatively straightforward to address:

  1. Evaluating metrics for each forest region: For the first question, you can put the output files from the same forest region into a single folder and then run the existing evaluation script on that folder, which gives you the metrics for each region individually (see the sketch after this list).

  2. Additional evaluation metrics: F-score and Cov are actually included in the final evaluation file; sorry, the naming might have caused some confusion. "Instance Segmentation MUCov" corresponds to Cov, and "Instance Segmentation F1 score" corresponds to the F-score. As for the number of detected trees, you can print the true_positive value to obtain it.
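
To make point 1 concrete, here is a minimal sketch of the grouping step. It assumes all per-plot prediction files sit in a single folder and that each filename starts with a region tag; the directory names and the filename rule are placeholder assumptions for illustration, not the repository's actual layout.

```python
import shutil
from pathlib import Path

# Hypothetical layout: every per-plot prediction file sits in one folder and
# each filename starts with a region tag (e.g. "regionA_plot3.ply"). The paths
# and the naming rule below are placeholders; adapt them to your own outputs.
OUTPUT_DIR = Path("outputs/all_plots")      # folder holding every prediction file
REGION_ROOT = Path("outputs/per_region")    # per-region folders are created here

for f in OUTPUT_DIR.iterdir():
    if not f.is_file():
        continue
    region = f.name.split("_")[0]           # derive the region tag from the filename
    region_dir = REGION_ROOT / region
    region_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy(f, region_dir / f.name)     # copy rather than move, keeping the originals
```

Afterwards, point evaluation_stats_FOR.py at each outputs/per_region/<region> folder in turn, so the reported oAcc/mIoU/F1/MUCov are computed over that region only, as in Table 10.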

yihshe (Author) commented Jan 20, 2025

Thank you for your reply; that's been very helpful!
