Added Performance and Fairness Evaluations to Jupyter Notebook #54
Thank you, Sara, for completing the performance and fairness evaluation! As requested, I looked at the Statistical Parity Difference in the final section of the notebook and noticed some errors within the formulas. I did the following: To calculate the SPD_predicted, we need to use the predicted outcomes of both the men and women, respectively, and find the difference between them.
Similarly, to calculate the SPD_actual, we use the actual outcomes of the men and the women, respectively.
Finally, we can calculate the SPD_predicted and the SPD_actual using the following:
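As a minimal sketch of these corrected formulas (assuming 0/1 outcome arrays per gender; the variable names and data here are illustrative, not the notebook's):

```python
import numpy as np

# Illustrative stand-ins for the notebook's per-gender outcome columns,
# where 1 = favorable outcome and 0 = unfavorable.
actual_men = np.array([1, 0, 1, 1, 0])
actual_women = np.array([1, 0, 0, 1, 0])
predicted_men = np.array([1, 1, 1, 1, 0])
predicted_women = np.array([1, 0, 0, 0, 0])

# Statistical Parity Difference: the favorable-outcome rate for one group
# minus the rate for the other, computed separately on predictions and labels.
spd_predicted = predicted_men.mean() - predicted_women.mean()
spd_actual = actual_men.mean() - actual_women.mean()
```

A value near zero indicates parity; comparing `spd_predicted` against `spd_actual` shows whether the model widens any gap already present in the labels.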
I have reviewed the rest of the notebook and fixed minor errors and bugs, but overall, the work is excellent!
Thanks for the work! LGTM!
I fixed some bugs related to recall and confusion matrix results. I think we're ready to merge!
Performance Evaluation

I performed performance evaluation on the test dataset by splitting the `actual` and `predicted` values by `Gender`, then checking the accuracy score and confusion matrix for each gender, as well as the True Positive Rate (Recall) for `Good Candidates`.

Fairness Evaluation

I performed fairness evaluation on the test dataset by
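The per-gender checks described under Performance Evaluation can be sketched with scikit-learn (the data and group labels below are illustrative assumptions, not the notebook's actual columns):

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, recall_score

# Illustrative per-gender (actual, predicted) labels, 1 = "Good Candidate".
groups = {
    "Men": (np.array([1, 0, 1, 1, 0]), np.array([1, 1, 1, 1, 0])),
    "Women": (np.array([1, 0, 0, 1, 0]), np.array([1, 0, 0, 0, 0])),
}

for gender, (actual, predicted) in groups.items():
    acc = accuracy_score(actual, predicted)
    cm = confusion_matrix(actual, predicted)  # rows = actual, cols = predicted
    tpr = recall_score(actual, predicted)     # True Positive Rate (Recall)
    print(f"{gender}: accuracy={acc:.2f}, recall={tpr:.2f}")
    print(cm)
```

Comparing recall across genders (rather than accuracy alone) reveals whether qualified candidates from one group are identified at a lower rate than the other.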