add label to metrics #34
Conversation
Could you check my comment?
rekcurd/utils/__init__.py
Outdated
if num is None:
    self.num = 0
    self.accuracy = 0.0
    self.precision = [0.0]
    self.recall = [0.0]
    self.fvalue = [0.0]
    self.option = {}
    self.label = [0.0]
Is it OK to set list[float] here?
Yes. PredictLabel accepts List[float], and the IO (gRPC message) tensor's val field is repeated float.
I mean that the return type of label depends on the user's predictor, as I wrote in rekcurd_worker_servicer.py.
@keigohtr I removed the default None value from EvaluateResult!
Codecov Report
@@            Coverage Diff             @@
##           master      #34      +/-   ##
==========================================
+ Coverage   81.17%   81.92%   +0.74%
==========================================
  Files          14       14
  Lines         712      708       -4
==========================================
+ Hits          578      580       +2
+ Misses        134      128       -6
Continue to review full report at Codecov.
Could you check it again, please?
        self.option = option

    def __init__(self, num: int, accuracy: float, precision: List[float],
                 recall: List[float], fvalue: List[float], label: List[PredictLabel],
                 option: Dict[str, float] = {}):
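For context, a minimal usage sketch of this constructor, assuming the metric lists are index-aligned with label; the numbers, option key, and import path below are illustrative assumptions, not taken from the repository.

```python
from rekcurd.utils import EvaluateResult  # assumed import path

# Hypothetical two-class evaluation: precision[i], recall[i] and fvalue[i]
# all describe the class identified by label[i].
result = EvaluateResult(
    num=100,
    accuracy=0.85,
    precision=[0.80, 0.90],
    recall=[0.75, 0.88],
    fvalue=[0.77, 0.89],
    label=[0.0, 1.0],
    option={"roc_auc": 0.91},
)
```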
BTW, I think the option type is just dict, not Dict[str, float]...?
It's OK at the moment. Could you make an issue for this? Additional metrics must be a general field for ML evaluation, but the current spec is specific to our use case, so we need to fix it.
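A rough sketch of the more general shape being discussed, assuming option stays optional; the class name and the looser annotation are illustrative assumptions, not the final design, which is tracked separately.

```python
from typing import Any, Dict, List, Optional

class EvaluateResultSketch:
    """Sketch only: loosens option to a generic mapping instead of Dict[str, float]."""
    def __init__(self, num: int, accuracy: float,
                 precision: List[float], recall: List[float], fvalue: List[float],
                 label: list, option: Optional[Dict[str, Any]] = None):
        self.num = num
        self.accuracy = accuracy
        self.precision = precision
        self.recall = recall
        self.fvalue = fvalue
        self.label = label
        # Avoid sharing one mutable default dict across instances.
        self.option = option if option is not None else {}
```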
I did it! #35
LGTM
What is this PR for?
Added label to EvaluationMetrics to identify which precision, recall, and fvalue correspond to which labels.
This PR includes:
- EvaluateResult class
- rekcurd_pb2.EvaluationMetrics
What type of PR is it?
Feature
What is the issue?
N/A
How should this be tested?
python -m unittest test.test_dashboard_servicer
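As a rough sketch of the kind of assertion such a test could make; the test class, values, and import path are illustrative assumptions and are not the actual contents of test.test_dashboard_servicer.

```python
import unittest

from rekcurd.utils import EvaluateResult  # assumed import path


class EvaluateResultLabelTest(unittest.TestCase):
    def test_label_aligns_with_metrics(self):
        # Hypothetical two-class result; labels align with the metric lists.
        result = EvaluateResult(
            num=10, accuracy=0.9,
            precision=[0.9, 0.8], recall=[0.85, 0.75], fvalue=[0.87, 0.77],
            label=[0.0, 1.0], option={})
        self.assertEqual(result.label, [0.0, 1.0])
        self.assertEqual(len(result.label), len(result.precision))


if __name__ == "__main__":
    unittest.main()
```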