-
The best approach is to "score" the different options and use those scores for ranking. There is past work on this; see e.g. https://arxiv.org/abs/2003.06713
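In the paper linked above, each query-passage pair is scored by the model's relative probability of generating the token "true" versus "false", which yields a value between 0 and 1 that can be used directly for ranking. A minimal sketch of that normalization step (the logit values in the usage example are hypothetical, and in practice the two logits would come from the model's output distribution at the first decoding step):

```python
import math

def relevance_score(logit_true: float, logit_false: float) -> float:
    """Softmax over only the "true"/"false" token logits.

    Returns the probability assigned to "true", which serves as a
    relevance score in [0, 1].
    """
    # Subtract the max for numerical stability before exponentiating.
    m = max(logit_true, logit_false)
    e_true = math.exp(logit_true - m)
    e_false = math.exp(logit_false - m)
    return e_true / (e_true + e_false)

# Hypothetical logits: a pair the model judges relevant vs. irrelevant.
print(relevance_score(3.0, -1.0))  # close to 1
print(relevance_score(-2.0, 5.0))  # close to 0
```

Because the two-way softmax is monotone in the logit difference, ranking by this score is equivalent to ranking by `logit_true - logit_false`.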
-
I have a task with a query, e.g. "What is the best way to lose weight?", and a set of candidate passages. Based on the coarse retrieval score, we create positive examples (relevance above a threshold) and negative examples (very low relevance), and we train a BERT-like model on them in the sentence A / sentence B format of the Next Sentence Prediction task. Since that model emits a score between 0 and 1, I can use the score directly for ranking.
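The positive/negative construction described above can be sketched as follows (the function name and threshold values are illustrative, not from the original post):

```python
def make_pairs(query, passages_with_scores, pos_thresh=0.7, neg_thresh=0.1):
    """Build (query, passage, label) training pairs from coarse retrieval scores.

    Passages scoring above pos_thresh become positives (label 1),
    those below neg_thresh become negatives (label 0), and the
    ambiguous middle band is discarded.
    """
    positives = [(query, p, 1) for p, s in passages_with_scores if s >= pos_thresh]
    negatives = [(query, p, 0) for p, s in passages_with_scores if s <= neg_thresh]
    return positives + negatives

# Hypothetical coarse retrieval scores for three candidate passages.
pairs = make_pairs(
    "What is the best way to lose weight",
    [("Using Keto Diet", 0.9), ("Barack Obama", 0.05), ("Diet history", 0.5)],
)
```

Discarding the middle band keeps the labels clean; a mid-scoring passage like the third one above produces no training pair at all.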
However, with T5 the input/output looks like this:

Input: Question: "What is the best way to lose weight" Response: "Using Keto Diet"
Output: 1
Input: Question: "What is the best way to lose weight" Response: "Barack Obama"
Output: 0

How exactly do we ensure that the outputs are scores between 0 and 1? Is there some other way to handle this problem with T5?