A withdrawn prediction is included in my score in the accuracy box for the range it's in.
First, I made a prediction and judged it true, then unknown and withdrawn. It was my only prediction at the time. I noticed that the withdrawn prediction showed up on my results graph.
To see whether the withdrawn prediction would drag down my accuracy score, I ran a test: I added another prediction in the same accuracy range and marked it correct. The withdrawn prediction is indeed dragging the accuracy score for that range down to 50%.
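A minimal sketch of the arithmetic I'm seeing (the judgement names and data layout are my own assumptions for illustration, not PredictionBook's actual code):

```python
# Illustrative sketch only: shows how counting a withdrawn prediction in the
# accuracy denominator yields the 50% I observed instead of the expected 100%.
def accuracy(predictions):
    """Fraction of judged-correct predictions among those counted in a box."""
    counted = [p for p in predictions if p["judgement"] in ("right", "wrong", "withdrawn")]
    correct = [p for p in counted if p["judgement"] == "right"]
    return len(correct) / len(counted) if counted else None

box_90 = [
    {"judgement": "withdrawn"},  # the withdrawn 1% prediction
    {"judgement": "right"},      # the test prediction I marked correct
]

print(accuracy(box_90))  # 0.5 -- I would expect 1.0 if withdrawn were excluded
```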
Perhaps I don't understand what "withdrawn" means in the context of PredictionBook. If so, my confusion may itself be a sign of a user interface bug. Even so, it seems like withdrawn predictions just aren't supposed to do that.

http://predictionbook.com/predictions/180230
Note: the withdrawn prediction always had a 1% probability, yet it went into the 90% accuracy box. This was confusing until I realized the graph is kept compact by folding predictions with probabilities under 50% into the complementary boxes and reversing the interpretation of their judgements. I tested this with a prediction at 30% probability, marked it false, and it added to my accuracy in the 70% box. That looks like the intended behaviour, and it seems sensible now that I've figured it out, but my initial confusion might be a common experience for new users. It might be worth adding this to a list of things to do UX testing on.
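To spell out the folding rule I think I'm observing, here is a small sketch (my own guess at the logic, not the app's actual implementation):

```python
# Assumed folding rule: predictions below 50% are mapped to the complementary
# box and the judgement is inverted, e.g. a 30% prediction judged false is
# shown as a correct prediction in the 70% box.
def fold(probability_pct, judged_true):
    """Return the (box, counts_as_correct) pair a prediction lands in."""
    if probability_pct < 50:
        return 100 - probability_pct, not judged_true
    return probability_pct, judged_true

print(fold(30, judged_true=False))  # (70, True)  -> correct in the 70% box
print(fold(1, judged_true=True))    # (99, False) -> lands in the 90%+ box
```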
"I made a prediction and judged it true, then unknown and withdrawn."
Following those steps updated my stats, so I've reproduced this issue.
However, it's a duplicate of #100: the buggy behaviour here is triggered when you switch the judgement to unknown, not when you withdraw the prediction, so this particular issue can be closed as a duplicate.