Love the paper. I've tried it on my own closed-domain dataset and got poor recall. I believe the low recall is due to imbalanced labels, but in my use case I value recall over precision.
Is there a way to tune the model to increase recall at the cost of precision?
Unfortunately, I can't think of a straightforward way to increase recall, since the model is trained for generation with a token-level cross-entropy loss. One thing you could try is lowering the probability of producing the <arg> placeholder token at decoding time, so the model is nudged toward filling argument slots instead of leaving them empty.
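For illustration, here is a minimal sketch of that idea: adding a negative bias to one token's logit before softmax at each decoding step. The vocabulary, token index, and bias value below are all made up for the toy example; in a real setup you would look up the actual id of the <arg> token in your tokenizer and apply the bias inside the decoding loop (e.g. via a custom logits processor).

```python
import numpy as np

def bias_token_logits(logits, token_id, bias):
    """Return a copy of `logits` with `bias` added to one token's score.

    A negative bias makes the decoder less likely to emit that token at
    this step. Applied to an "empty slot" placeholder like <arg>, this
    pushes the model toward generating argument spans instead of leaving
    slots unfilled, trading precision for recall.
    """
    out = np.asarray(logits, dtype=float).copy()
    out[token_id] += bias
    return out

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Toy 4-token vocabulary; pretend index 3 is the <arg> placeholder.
logits = [1.0, 0.5, 0.2, 2.0]
p_before = softmax(np.asarray(logits))[3]
p_after = softmax(bias_token_logits(logits, token_id=3, bias=-2.0))[3]
assert p_after < p_before  # placeholder is now less likely to be emitted
```

How strongly to bias is a hyperparameter: sweep it on a dev set and pick the value that gives the recall/precision trade-off you want.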