
add label to EvaluateResult #13

Merged: 4 commits from feature/add_label_to_metrics into master on Feb 8, 2019

Conversation

@yuki-mt (Member) commented Feb 7, 2019

What is this PR for?

Added a label field to EvaluateResult to identify which precision, recall, and f-value correspond to which labels.
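
For illustration, a minimal sketch of the idea (function and variable names are illustrative, not the PR's actual code): the per-class metric arrays are returned together with the label list, so each value can be matched to its class.

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate_with_labels(label_gold, label_predict, labels):
    """Illustrative sketch: return per-class metrics together with the
    label list, so precision[i], recall[i] and fvalue[i] belong to labels[i]."""
    accuracy = accuracy_score(label_gold, label_predict)
    precision, recall, fvalue, _ = precision_recall_fscore_support(
        label_gold, label_predict, labels=labels)
    return {
        "num": len(label_gold),
        "accuracy": accuracy,
        "label": list(labels),
        "precision": precision.tolist(),
        "recall": recall.tolist(),
        "fvalue": fvalue.tolist(),
    }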

This PR includes

What type of PR is it?

Feature/Bugfix/....

What is the issue?

N/A

How should this be tested?

@yuki-mt requested a review from keigohtr on February 7, 2019 11:21
@keigohtr (Member) left a comment

Could you check my comment?

@@ -97,8 +97,9 @@ def evaluate(self, file_path: str) -> Tuple[EvaluateResult, List[EvaluateResultD
label_predict.append(result.label)

accuracy = accuracy_score(label_gold, label_predict)
p_r_f = precision_recall_fscore_support(label_gold, label_predict)
res = EvaluateResult(num, accuracy, p_r_f[0].tolist(), p_r_f[1].tolist(), p_r_f[2].tolist(), {})
uniq_labels = sorted(list(set(label_gold)))
@keigohtr (Member) commented

Does it work? I think the label order must not be sorted, since we need the values to correspond to the output.
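
One way to pin the ordering (a sketch, not necessarily the fix adopted in this PR; the label values below are made up): pass the intended label order to precision_recall_fscore_support via its labels argument, so the returned arrays follow that order rather than whatever set() or sorted() happens to produce.

from sklearn.metrics import precision_recall_fscore_support

label_gold = ["dog", "cat", "dog", "bird"]
label_predict = ["dog", "cat", "cat", "bird"]

# The order the model's outputs use (illustrative); neither sorted nor set() order.
labels = ["dog", "bird", "cat"]

precision, recall, fvalue, support = precision_recall_fscore_support(
    label_gold, label_predict, labels=labels)

# precision[i], recall[i], fvalue[i] and support[i] all describe labels[i].
for lbl, p, r, f in zip(labels, precision, recall, fvalue):
    print(f"{lbl}: precision={p:.2f} recall={r:.2f} fvalue={f:.2f}")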

@yuki-mt (Member, Author) replied
That's right. I removed sorted, so please check it again.

@keigohtr (Member) left a comment

Could you check it again?

@@ -97,8 +97,9 @@ def evaluate(self, file_path: str) -> Tuple[EvaluateResult, List[EvaluateResultD
label_predict.append(result.label)

accuracy = accuracy_score(label_gold, label_predict)
p_r_f = precision_recall_fscore_support(label_gold, label_predict)
res = EvaluateResult(num, accuracy, p_r_f[0].tolist(), p_r_f[1].tolist(), p_r_f[2].tolist(), {})
uniq_labels = list(set(label_gold))
@keigohtr (Member) commented

Is this label's order correct?

@keigohtr (Member) commented

In this sample, we need to determine the labels by hand so that they match the gold labels.

Example

labels = ["0","1","2","3","4","5","6","7","8","9"]

@yuki-mt (Member, Author) replied
@keigohtr
I explicitly separated label and label_index in 6159e41.
(label order is based on self.labels)
Please review it.
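
Roughly what separating label from label_index looks like (a hypothetical sketch, not the code in 6159e41; self.labels is written as a plain list here): the gold data and model output are indices, and the label list translates them to names while also fixing the order of the reported metrics.

from sklearn.metrics import precision_recall_fscore_support

labels = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]  # stand-in for self.labels

# label_index: positions into `labels` (e.g. raw model outputs and gold data).
gold_indices = [3, 3, 7, 1]
predict_indices = [3, 8, 7, 1]

# label: the readable value obtained via the index.
label_gold = [labels[i] for i in gold_indices]
label_predict = [labels[i] for i in predict_indices]

# Metrics are reported in the order of `labels`; classes absent from the data
# get zero scores (and a warning from scikit-learn).
precision, recall, fvalue, _ = precision_recall_fscore_support(
    label_gold, label_predict, labels=labels)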

@keigohtr (Member) left a comment
Nice work @yuki-mt
LGTM

@yuki-mt merged commit 5748baf into master on Feb 8, 2019
@yuki-mt deleted the feature/add_label_to_metrics branch on February 8, 2019 07:13