add label to EvaluateResult #13
Conversation
Could you check my comment?
python/sklearn-digits/app.py (outdated)
@@ -97,8 +97,9 @@ def evaluate(self, file_path: str) -> Tuple[EvaluateResult, List[EvaluateResultD
label_predict.append(result.label)

accuracy = accuracy_score(label_gold, label_predict)
p_r_f = precision_recall_fscore_support(label_gold, label_predict)
res = EvaluateResult(num, accuracy, p_r_f[0].tolist(), p_r_f[1].tolist(), p_r_f[2].tolist(), {})
uniq_labels = sorted(list(set(label_gold)))
Does it work? I don't think the labels should be sorted, since they need to correspond to the order of the output.
That's right. I removed sorted, so please check it again.
Could you check it again?
python/sklearn-digits/app.py (outdated)
@@ -97,8 +97,9 @@ def evaluate(self, file_path: str) -> Tuple[EvaluateResult, List[EvaluateResultD
label_predict.append(result.label)

accuracy = accuracy_score(label_gold, label_predict)
p_r_f = precision_recall_fscore_support(label_gold, label_predict)
res = EvaluateResult(num, accuracy, p_r_f[0].tolist(), p_r_f[1].tolist(), p_r_f[2].tolist(), {})
uniq_labels = list(set(label_gold))
Is this label order correct?
In this sample, we need to determine the labels by hand so that they match the gold labels.
Example:
labels = ["0","1","2","3","4","5","6","7","8","9"]
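For reference, a minimal sketch of the point under discussion (not the PR's code; the sample data below are made up): list(set(...)) has no guaranteed order, whereas passing an explicit, hand-determined label list to scikit-learn's precision_recall_fscore_support makes each per-class value correspond to a known label.

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical gold/predicted labels, for illustration only.
label_gold = ["0", "1", "2", "1", "0", "2"]
label_predict = ["0", "1", "2", "1", "0", "1"]

# list(set(label_gold)) has no guaranteed order, so it cannot be trusted
# to line up with the metric arrays; fix the order by hand instead.
labels = ["0", "1", "2"]

accuracy = accuracy_score(label_gold, label_predict)

# With labels= given, the returned arrays follow that order, so
# labels[i] corresponds to precision[i], recall[i] and fscore[i].
precision, recall, fscore, support = precision_recall_fscore_support(
    label_gold, label_predict, labels=labels)

for lab, p, r, f in zip(labels, precision, recall, fscore):
    print(f"label={lab}: precision={p:.2f}, recall={r:.2f}, f={f:.2f}")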
Nice work @yuki-mt
LGTM
What is this PR for?
Added label to EvaluateResult to identify which precision, recall and f-value correspond to which labels.
What type of PR is it?
Feature/Bugfix/....
What is the issue?
N/A
How should this be tested?