[FEATURE] Support for raw sparse vectors input in the neural sparse query #608
Comments
Do you mean the |
Correct, sorry for the confusion. I used the wrong query in my example, probably because I had never used the |
@brusic I changed the title to a more accurate one.
Hi @brusic, our enhancement has been merged and will be released in version 2.14. Users can now use the neural sparse query with raw tokens. Sample query:
|
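The sample query itself was not captured above. Based on the `query_tokens` parameter that OpenSearch 2.14 added to the `neural_sparse` query, it presumably looked something like the following (the index name, field name, and token weights are illustrative):

```json
GET my-nlp-index/_search
{
  "query": {
    "neural_sparse": {
      "passage_embedding": {
        "query_tokens": {
          "hi": 4.2,
          "world": 3.1
        }
      }
    }
  }
}
```

Here `query_tokens` is a map from token to weight, supplied directly by the client instead of being produced by a sparse encoding model in a pipeline.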
Closing this issue as we have finished the feature. Feel free to re-open it if there is more to discuss.
Is your feature request related to a problem?
Neural sparse search
Currently the neural sparse query only accepts a model ID alongside the text to be encoded, which requires a model to be registered in a pipeline. The query should also support passing in the vector directly, bypassing the pipeline phase. Client-side encoding can be beneficial for several reasons: ad hoc analysis, unit testing, and custom or unsupported models.
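For context, the pipeline-based usage described above looks roughly like this (the index name, field name, and model ID placeholder are illustrative):

```json
GET my-nlp-index/_search
{
  "query": {
    "neural_sparse": {
      "passage_embedding": {
        "query_text": "hello world",
        "model_id": "<sparse-encoding-model-id>"
      }
    }
  }
}
```

The `model_id` must reference a deployed sparse encoding model, which is the dependency the feature request wants to make optional.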
What solution would you like?
Accept a vector, similar to knn search
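For comparison, the k-NN query already accepts a raw vector directly, with no model or pipeline involved (the index name, field name, and vector values below are illustrative):

```json
GET my-knn-index/_search
{
  "query": {
    "knn": {
      "my_vector": {
        "vector": [0.1, 0.2, 0.3],
        "k": 10
      }
    }
  }
}
```

The request is for an analogous shape in the neural sparse query, where the client supplies the sparse token-weight vector itself.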
What alternatives have you considered?
rank_features is a close alternative, but it can only rank (boost) other query clauses.
Do you have any additional context?
Elasticsearch will soon have a weighted_tokens query, which is analogous to its text_expansion query.