Reuse KNNVectorFieldData to reduce disk usage #1571

Open
luyuncheng wants to merge 1 commit into base: main
Conversation

@luyuncheng (Collaborator) commented Mar 20, 2024

Description

In some scenarios, we want to reduce disk usage and I/O throughput for the source field, so we exclude k-NN fields from `_source` in the mapping, like the following. (This makes the k-NN field impossible to retrieve or rebuild.)

"mappings": { 
  "_source": { 
    "excludes": [
      "target_field1",
      "target_field2",
     ]
  }
}

So I propose using the doc_values field to retrieve the vector fields, like:

POST some_index/_search
{
  "docvalue_fields": [
    "vector_field1",
    "vector_field2"
  ],
  "_source": false
}

Proposal

  1. Rewrite KNNVectorDVLeafFieldData to get data from doc values

I rewrote KNNVectorDVLeafFieldData so that KNN80BinaryDocValues can return specific k-NN docvalue_fields, like this (vector_field1 is a knn_vector field):

"hits":[{"_index":"test","_id":"1","_score":1.0,"fields":{"vector_field1":["1.5","2.5"]}},{"_index":"test","_id":"2","_score":1.0,"fields":{"vector_field1":["2.5","1.5"]}}]

Optimization result (1M SIFT dataset, 1 shard):
with source stored: 1389 MB
without source stored: 1055 MB (-24%)

For a further dive into k-NN doc values fields: when using the Faiss engine, I think we can use the reconstruct_n interface to retrieve specific doc values and save the disk usage of BinaryDocValuesFormat, or, as the issue comments suggest, redesign a KnnVectorsFormat.

  2. Compose the vector field back into _source

I added a KNNFetchSubPhase with a processor, similar to FetchSourcePhase#FetchSubPhaseProcessor, that combines the docvalue_fields back into _source, much like synthetic source logic; a sketch of the subphase shape follows.
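A rough sketch of the subphase shape (the merge logic is elided; this is not the exact PR code):

    import java.io.IOException;

    import org.apache.lucene.index.LeafReaderContext;
    import org.opensearch.search.fetch.FetchContext;
    import org.opensearch.search.fetch.FetchSubPhase;
    import org.opensearch.search.fetch.FetchSubPhaseProcessor;

    // Sketch only: the processor reads vectors from doc values per segment and
    // merges them into each hit's _source, synthetic-source style.
    public class KNNFetchSubPhase implements FetchSubPhase {
        @Override
        public FetchSubPhaseProcessor getProcessor(FetchContext fetchContext) throws IOException {
            return new FetchSubPhaseProcessor() {
                @Override
                public void setNextReader(LeafReaderContext readerContext) throws IOException {
                    // open per-segment doc values for the vector fields excluded from _source
                }

                @Override
                public void process(HitContext hitContext) throws IOException {
                    // read the vector for hitContext.docId() from doc values and merge it
                    // into the hit's _source map
                }
            };
        }
    }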

Issues Resolved

#1087
#1572

  • First, I made KNNVectorDVLeafFieldData return the vector doc values fields, the same way scripts do.
  • Second, I wrote a KNNFetchSubPhase class that adds a processor to the fetch phase and fills _source with the doc values fields from the first step. This works something like synthetic source, but the fields must be requested explicitly in the search body via docvalue_fields.

Check List

  • New functionality includes testing.
    • All tests pass
  • New functionality has been documented.
    • New functionality has javadoc added
  • Commits are signed as per the DCO using --signoff

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

@luyuncheng (Collaborator, Author)

@navneet1v Easy test:

  1. Create the index with:
{"mappings": {"_source": {"excludes": ["vector_field1"]}, "properties": {"vector_field1": {"type": "knn_vector", "dimension": 2}, "vector_field2": {"type": "knn_vector", "dimension": 4}, "number_field": {"type": "long"}}}}
  2. Write data with:
{"vector_field1": [1.5, 2.5], "vector_field2": [1.0, 2.0, 3.0, 4.0], "number_field": 10}
  3. POST test/_search: the response does not contain vector_field1.
  4. POST test/_search with {"docvalue_fields": ["vector_field1"]}: the response contains vector_field1 in _source and in fields.

@navneet1v (Collaborator)

> @navneet1v Easy test: [steps 1-4 above]

My question was: how are we ensuring that the KNNFetchSubPhase does not run during search and runs only during re-indexing?

@luyuncheng (Collaborator, Author)

> My question was: how are we ensuring that the KNNFetchSubPhase does not run during search and runs only during re-indexing?

@navneet1v Gotcha, I will continue testing with reindex and other scenarios.

@navneet1v (Collaborator)

> @navneet1v Gotcha, I will continue testing with reindex and other scenarios.

Also, there is something called _recovery_source, which is added as a fallback to support re-indexing. If you are testing locally, I would recommend removing these lines of code
https://github.com/opensearch-project/OpenSearch/blob/e6975e412b09a8d82675edd9a43c20f7c325c0f9/server/src/main/java/org/opensearch/index/mapper/SourceFieldMapper.java#L215-L219

to ensure that the recovery source is never created. The recovery source gets deleted after some time if indexing happens continuously, but I have never tested this to confirm whether that really happens.

@luyuncheng (Collaborator, Author)

> My question was: how are we ensuring that the KNNFetchSubPhase does not run during search and runs only during re-indexing?

@navneet1v I tested the search-source and reindex scenarios with KNNFetchSubPhase; results are correct for non-nested fields.

@luyuncheng (Collaborator, Author) commented Apr 7, 2024

> Why can't we have public BytesRef nextValue() throws IOException return the whole string representation of the current vector?

@jmazanec15 As I see it, the synthetic source field is of type XContentType.JSON.xContent, but SortedBinaryDocValues#nextValue() returns a binary byte array. I am not sure whether we need to translate the binary bytes in the DocValuesFormat into a UTF-8, JSON-like byte string.

Also, I see we are trying to reconstruct the vector format with KnnVectorsFormat, so I think in KnnVectorsFormat we can simply rewrite doc values in JSON.xContent format.

But I am not sure which format SortedBinaryDocValues#nextValue() should return (binary, or one double value per dimension). Which do you think is better?
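For concreteness, the two candidate encodings look roughly like this (illustrative only, not the PR's code):

    import java.nio.ByteBuffer;
    import org.apache.lucene.util.BytesRef;

    // Illustrative only: the same vector under the two encodings being discussed.
    final class EncodingDemo {
        public static void main(String[] args) {
            float[] vector = { 1.5f, 2.5f };

            // (a) binary form: 4 raw bytes per float, as a BinaryDocValuesFormat might store it
            ByteBuffer buf = ByteBuffer.allocate(vector.length * Float.BYTES);
            for (float v : vector) {
                buf.putFloat(v);
            }
            BytesRef binaryForm = new BytesRef(buf.array());
            System.out.println("binary length = " + binaryForm.length);

            // (b) string form: one UTF-8 value per dimension ("1.5", "2.5"), directly JSON-friendly
            for (float v : vector) {
                System.out.println(new BytesRef(Float.toString(v)).utf8ToString());
            }
        }
    }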

@luyuncheng (Collaborator, Author) commented Apr 7, 2024

> Also, there is something called _recovery_source, which is added as a fallback to support re-indexing.

@navneet1v I added integration tests for the search and reindex scenarios; I think KNNFetchSubPhase works correctly to synthesize the _source field.

@bugmakerrrrrr (Contributor)

> The order of these phases cannot be changed.

@navneet1v Indeed, this is the key point I want to emphasize, and it is precisely why I suggest incorporating the filter logic you mentioned in your comment into the KNNFetchSubPhase. Otherwise, it will cause conflicts at the API level (I requested that certain fields be excluded from the response, yet they appear in it). Or, if the filter logic is too complex to implement, we can treat this as a limitation and clearly mark it in the documentation.

@luyuncheng (Collaborator, Author)

> ... I suggest incorporating the filter logic you mentioned in your comment into the KNNFetchSubPhase ...

LGTM, I like it.

@navneet1v (Collaborator)

> ... I suggest incorporating the filter logic you mentioned in your comment into the KNNFetchSubPhase ...

@luyuncheng and @bugmakerrrrrr agreed.

@bugmakerrrrrr (Contributor) left a comment

@navneet1v @luyuncheng I've checked the filter logic in FetchSourcePhase, and I think that it's too complicated to implement in this subphase.

@navneet1v (Collaborator)

@luyuncheng can we fix up the comments so that we can merge this change?

Signed-off-by: luyuncheng <[email protected]>
@luyuncheng (Collaborator, Author)

> @luyuncheng can we fix up the comments so that we can merge this change?

@navneet1v FIXED at 2a61fcd

@jmazanec15 (Member)

Thanks @luyuncheng. I have been reviewing, and I think overall it looks good. I'm still not confident about the nested portion, particularly innerProcessOneNestedField. Could you add comments or explain more about what's happening there?

Also, can we capture a list of known limitations in the issue? Somewhere we can refer to, when developing the documentation, for what can and cannot be done with this feature. From my testing, here is what I have:

  1. [nested] Passing inner_hits: {} for the query results in an exception; inner_hits will not work.
  2. [nested] A nested-field partial doc can only have the vector and the source exclude (from comment).
  3. [non-nested] If source is disabled for the mapping completely (i.e. "mappings": {"_source": {"enabled": false, "recovery_source_enabled": false}}), synthetic source will not work. So, in order for synthetic source to work, the vector fields need to be excluded explicitly.

Also, do we know if it works with partially constructed non-nested documents? Are there any other limitations for the non-nested case?

The functionality that will work:

  1. Basic search when the vector field is excluded from source: the vectors will be included.
  2. Basic nested search: the vectors will show up.
  3. Reindex of non-nested indices.
  4. Reindex of nested indices.

@jmazanec15 (Member)

@luyuncheng I'm going to work on this one a little bit and see if I can add it to 2.18! Will open up a new PR.

@luyuncheng (Collaborator, Author)

> @luyuncheng I'm going to work on this one a little bit and see if I can add it to 2.18! Will open up a new PR.

@jmazanec15 How about I create a new PR and rebase it on master?

@jmazanec15 (Member)

@luyuncheng I rebased and started experimenting with it here: https://github.com/jmazanec15/k-NN-1/commits/vector-synthetic-source/. Please take a look! I think before raising a new PR, it'd be best to figure out a couple of high-level approach questions, just so we don't end up going back and forth on revisions too much.

Currently, I have a few concerns with implementing synthetic source as a fetch subphase:

  1. The order in which fetch subphases are executed is non-deterministic. So, if there is a feature, say highlighting, that has its own fetch subphase, the order in which the subphases are processed will determine whether the features work together or not. Thus, it will be difficult to ensure robustness.
  2. I am not sure whether this approach will work with the Field Level Security feature. Field level security hides certain fields from users. I am not sure if this approach would circumvent that security mechanism and therefore present a vulnerability. As an initial step, we could just call out that this does not work with field level security, but blocking it explicitly may be somewhat tricky.

I was discussing the field level security implementation with @cwperks, and I thought it was pretty interesting; a similar strategy might be better for our use case. They implement onIndexModule.setReaderWrapper() (see https://github.com/opensearch-project/security/blob/main/src/main/java/org/opensearch/security/OpenSearchSecurityPlugin.java#L698). Then, when reading certain fields, they filter them out. For instance, for source: https://github.com/opensearch-project/security/blob/main/src/main/java/org/opensearch/security/configuration/DlsFlsFilterLeafReader.java#L652-L669. Given that our use case is just the opposite (we want to add fields when they are not present), it seems like this overall approach might make sense and give us a more robust solution that is compatible with a lot of features by default.
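For reference, the extension point in question looks roughly like this (a sketch; SyntheticSourceDirectoryReader is a hypothetical wrapper class, not existing code):

    import org.opensearch.index.IndexModule;
    import org.opensearch.plugins.Plugin;

    // Sketch of the extension point being discussed. Note that setReaderWrapper()
    // may only be called by one plugin, which is the conflict described below.
    public class VectorSourcePlugin extends Plugin {
        @Override
        public void onIndexModule(IndexModule indexModule) {
            // wrap each DirectoryReader so stored-field reads could inject synthetic vector source
            indexModule.setReaderWrapper(indexService -> reader -> new SyntheticSourceDirectoryReader(reader));
        }
    }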

The major issue with this, however, is that indexModule.setReaderWrapper() can only be set once (https://github.com/opensearch-project/OpenSearch/blob/main/server/src/main/java/org/opensearch/index/IndexModule.java#L443-L459). Also, the javadoc says "The wrapped reader can filter out document just like delete documents etc. but must not change any term or document content." I might be misinterpreting this, but it would seem like FLS might be breaking this contract (@cwperks, am I incorrect here?).

In order for this to work, I think we would need to somehow apply the synthetic source injection before the security fls wrapper does.

That being said, I am wondering if we should put an extension point in OpenSearch core that will allow fields to inject into source here in a similar manner to how FLS security is implemented.

@cwperks (Member) commented Oct 17, 2024

"The wrapped reader can filter out document just like delete documents etc. but must not change any term or document content." I might be misinterpreting this - but it would seem like FLS might be breaking this contract (@cwperks am I incorrect here?)

^ That does appear to be the case. I'd have to dive into the change that introduced that comment to understand the motivation. It looks like it's a change from before the fork. FLS/FieldMasking does not change the stored data, but it does modify the result: in the case of FieldMasking it masks the returned value, and with FLS it can choose to exclude fields from the result.

@luyuncheng (Collaborator, Author)

> The order in which fetch subphases are executed is non-deterministic. So, if there is a feature, say highlighting, that has its own fetch subphase, the order in which the subphases are processed will determine whether the features work together or not. Thus, it will be difficult to ensure robustness.

@jmazanec15 as the following code shows:

https://github.com/opensearch-project/OpenSearch/blob/0419e5d8a5b5327663c09e93feb931281da7b64e/server/src/main/java/org/opensearch/search/SearchModule.java#L1060-L1073

https://github.com/opensearch-project/OpenSearch/blob/0419e5d8a5b5327663c09e93feb931281da7b64e/server/src/main/java/org/opensearch/search/fetch/FetchPhase.java#L198-L211

The highlight subphase is added before plugins' fetch subphases, so there is some limitation for plugin subphases. Maybe we can add an explicit synthetic phase right after FetchSourcePhase.
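A simplified paraphrase of the linked registration code (not a verbatim copy of SearchModule):

    // Core fetch subphases are registered first, in a fixed order; plugin subphases are
    // appended afterwards. So a plugin subphase always runs after FetchSourcePhase and
    // highlighting, but in unspecified order relative to other plugins' subphases.
    private void registerFetchSubPhases(List<SearchPlugin> plugins) {
        registerFetchSubPhase(new FetchSourcePhase());
        // ... other core subphases, including highlighting ...
        registerFromPlugin(plugins, p -> p.getFetchSubPhases(context), this::registerFetchSubPhase);
    }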

@luyuncheng (Collaborator, Author)

> I am not sure whether this approach will work with the Field Level Security feature. Field level security hides certain fields from users. I am not sure if this approach would circumvent that security mechanism and therefore present a vulnerability. As an initial step, we could just call out that this does not work with field level security, but blocking it explicitly may be somewhat tricky.

@jmazanec15 @cwperks Hey, if we wrap a SecurityFlsDlsIndexSearcherWrapper on the data node for field security, why not wrap a fetch phase on the coordinator node? It would handle less verification, because the final hits are far fewer than all the docs the collector sees.

@jmazanec15 (Member)

@luyuncheng I had an alternative approach that I figured might let us cover more cases around fetch, for instance, other processors implementing fetch subphases. I'm curious to hear your thoughts on it.

Currently, we already have our own custom codec. What if we created our own custom StoredFieldsFormat? The format would need to be incredibly lightweight: it would implement a delegate pattern on the upstream format. However, for the StoredFieldsReader (which implements StoredFields), we override document:

    private final BiConsumer<Integer, BytesReference> sourceConsumer;

    @Override
    public void document(int docId, StoredFieldVisitor storedFieldVisitor) throws IOException {
        // Let the delegate populate the visitor with all stored fields first
        delegate.document(docId, storedFieldVisitor);
        // Only FieldsVisitor exposes the _source bytes; skip other visitor types
        if (!(storedFieldVisitor instanceof FieldsVisitor)) {
            return;
        }
        // Hand the fetched _source to a consumer that can rewrite it
        sourceConsumer.accept(docId, ((FieldsVisitor) storedFieldVisitor).source());
    }

Then, we can configure the sourceConsumer to manipulate the source via other formats, such as a doc values reader or vector values reader. The delegate wiring might look like the sketch below.
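(SourceConsumingStoredFieldsReader is a placeholder name for the reader shown above; the Lucene StoredFieldsFormat API is real.)

    import java.io.IOException;

    import org.apache.lucene.codecs.StoredFieldsFormat;
    import org.apache.lucene.codecs.StoredFieldsReader;
    import org.apache.lucene.codecs.StoredFieldsWriter;
    import org.apache.lucene.index.FieldInfos;
    import org.apache.lucene.index.SegmentInfo;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.IOContext;

    // Sketch of the delegate wiring: reads are wrapped so the sourceConsumer can see
    // each fetched _source; writes pass straight through to the upstream format.
    public final class SourceInjectingStoredFieldsFormat extends StoredFieldsFormat {
        private final StoredFieldsFormat delegate;

        public SourceInjectingStoredFieldsFormat(StoredFieldsFormat delegate) {
            this.delegate = delegate;
        }

        @Override
        public StoredFieldsReader fieldsReader(Directory directory, SegmentInfo si, FieldInfos fn, IOContext context) throws IOException {
            return new SourceConsumingStoredFieldsReader(delegate.fieldsReader(directory, si, fn, context));
        }

        @Override
        public StoredFieldsWriter fieldsWriter(Directory directory, SegmentInfo si, IOContext context) throws IOException {
            return delegate.fieldsWriter(directory, si, context);
        }
    }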

Similarly, in the future, we could think about doing the same on the write side so that we can automatically disable source for vector fields by default.

This approach would allow us to:

  1. Avoid non-deterministic ordering behavior around fetch subphases by intercepting source at a lower layer.
  2. More easily support FLS (I believe filtering happens in the visitor phase, so we would need to ensure that we are not accidentally adding the field back in).
  3. Automatically disable source for vector users without needing them to specify the excludes flag. This would let us avoid telling users to disable source in our docs, making a smoother, more performant out-of-the-box experience.

That being said, I'm not sure about:

  1. Having dependencies across formats.
  2. Whether casting to FieldsVisitor covers all functionality.

Does this approach sound reasonable @navneet1v @shatejas @heemin32?

@luyuncheng (Collaborator, Author) commented Oct 22, 2024

> Currently, we already have our own custom codec. What if we created our own custom StoredFieldsFormat? The format would need to be incredibly lightweight: it would implement a delegate pattern on the upstream format. However, for the StoredFieldsReader (which implements StoredFields), we override document.

@jmazanec15 Let me describe my understanding of the usage of a new StoredFieldsFormat:

  1. Create a new index whose mapping excludes the vector field from _source.
  2. Set the vector field as stored, then use the new StoredFieldsFormat to store the knn_vector.
  3. Finally, load the data from the k-NN file?

PUT vector_index
{
  "mappings": {
    "_source": {
      "excludes": [
        "vector_field"
      ]
    },
    "properties": {
      "vector_field": {
        "type": "knn_vector",
        "dimension": 2,
        "store": true
      }
    }
  }
}

As in my proposal for this PR, I just want to cut disk usage and reuse the data in doc values; in the majority of cases, we do not want to retrieve knn_vector from source.

I like your idea; it has the advantages you mentioned, but I do not see how a new StoredFieldsFormat saves disk usage.

@jmazanec15 (Member) commented Oct 22, 2024

> I like your idea; it has the advantages you mentioned, but I do not see how a new StoredFieldsFormat saves disk usage.

It would operate in the same way: exclude the vector field from source in the mapping. That is what saves disk. @luyuncheng Internally, the "source" is just stored as a stored field in Lucene, so from StoredFieldsReader we are able to access any stored field, including source.

Then, in the CustomStoredFieldsReader:

    @Override
    public void document(int docId, StoredFieldVisitor storedFieldVisitor) throws IOException {
        delegate.document(docId, storedFieldVisitor);
        if (!(storedFieldVisitor instanceof FieldsVisitor)) {
            return;
        }
        // Read the original _source and the vector, then splice the vector back
        // into the source before it is returned to the fetch phase
        BytesReference originalSource = ((FieldsVisitor) storedFieldVisitor).source();
        BytesReference syntheticVector = getVectorFromDocValuesOrVectorValues(docId, field);
        putSyntheticVectorIntoSource(originalSource, syntheticVector, field);
    }

The FieldsVisitor contains the source that will be returned.

So, this would let us modify the source as early as possible; thus, we would be able to support as many other features relying on source as possible.
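A hypothetical sketch of the putSyntheticVectorIntoSource step (XContent import paths vary across OpenSearch versions, and the vector argument is simplified to a float[]):

    // Hypothetical helper: parse the stored _source, put the vector back, re-serialize.
    static BytesReference putSyntheticVectorIntoSource(BytesReference originalSource, float[] vector, String field) throws IOException {
        // Parse the stored _source bytes into a mutable map
        Map<String, Object> source = XContentHelper.convertToMap(originalSource, false).v2();
        // Put the vector (read from doc values / vector values) back under its field name
        source.put(field, vector);
        // Re-serialize to JSON so the fetch phase sees a complete _source
        XContentBuilder builder = XContentFactory.jsonBuilder().map(source);
        return BytesReference.bytes(builder);
    }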

@jmazanec15 (Member)

As an update: I validated that the FetchSubPhase approach works with field level security on this branch: https://github.com/jmazanec15/k-NN-1/tree/vector-synthetic-source.

So, my only concern remaining with this approach is:

> The order in which fetch subphases are executed is non-deterministic. So, if there is a feature, say highlighting, that has its own fetch subphase, the order in which the subphases are processed will determine whether the features work together or not. Thus, it will be difficult to ensure robustness.

As @luyuncheng mentioned, we do not need to worry about this for core fetch subphases, but it could be problematic for non-core subphases. Also, I'm wondering if there are any features out there that do not read the source via the fetch phase routine.

@luyuncheng (Collaborator, Author) commented Oct 23, 2024

> As an update: I validated that the FetchSubPhase approach works with field level security on this branch: https://github.com/jmazanec15/k-NN-1/tree/vector-synthetic-source.

@jmazanec15 So, whether or not we introduce a new StoredFieldsFormat, field level security works with the FetchSubPhase, and we still need a FetchSubPhase for reindex.

Because stored_fields returns results like the following, it would not allow us to reindex from _source:

==> CREATE
PUT my-index-000001
{
  "mappings": {
    "_source": {
      "excludes": [
        "vector_field"
      ]
    },
    "properties": {
      "vector_field": {
        "type": "knn_vector",
        "dimension": 2,
        "store": true
      }
    }
  }
}
==> SEARCH
GET my-index-000001/_search
{
  "stored_fields": [ "vector_field" ]
}
==> RESPONSE
{
  "hits": {
    "hits": [
      {
        "_index": "my-index-000001",
        "_id": "1",
        "_source": {
              .....
        },
        "fields": {
          "vector_field": [ .... ]
        }
      }
    ]
  }
}

@jmazanec15 (Member)

@luyuncheng Not sure I'm following completely.

> Because stored_fields returns results like the following, it would not allow us to reindex from _source

I think there is some confusion around Stored Fields from an OpenSearch user perspective and from a Lucene perspective. The _source field is stored in Lucene as a stored field. See here:

// fieldType().name() is "_source"
context.doc().add(new StoredField(fieldType().name(), ref.bytes, ref.offset, ref.length));

So, the "_source" is fetched by calling the StoredFieldsReader.document - See this FieldVisitor.

So, if we implement our own StoredFieldsReader, we have a chance to intercept the "_source" stored field on the Lucene level. FLS does something similar here: https://github.com/opensearch-project/security/blob/main/src/main/java/org/opensearch/security/configuration/DlsFlsFilterLeafReader.java#L89.

So, taking the stored fields approach,

PUT my-index-000001
{
  "mappings": {
    "_source": {
      "excludes": [
        "vector_field"
      ]
    },
    "properties": {
      "vector_field": {
        "type": "knn_vector",
        "dimension": 2
      }
    }
  }
}

// This would still return vector_field in the source
GET my-index-000001/_search
{
...
}
==> RESPONSE
{
  "hits": {
    "hits": [
      {
        "_index": "my-index-000001",
        "_id": "1",
        "_source": {
              "vector_field": ..
        }
      }
    ]
  }
}

@jmazanec15 (Member)

@luyuncheng This is a PoC for what I am talking about: jmazanec15@ac5e3f8.

@luyuncheng (Collaborator, Author) commented Oct 29, 2024

> jmazanec15/k-NN-1@ac5e3f8

@jmazanec15 LGTM. After reviewing the commits, I really like this idea!

This idea can reduce storage by about 1/3, though it increases CPU usage for parsing JSON at the stored-field level.

@shatejas (Collaborator)

> Does this approach sound reasonable @navneet1v @shatejas @heemin32?

@jmazanec15 From a user-experience perspective, adding _source.excludes and then returning vectors in search results can be conflicting. Can we consider merging the read and write sides together in main while keeping the implementation iterative? Let me know your thoughts on this.

Another scenario I can think of (if we do read and write together) is the case where the user explicitly specifies _source.excludes: [vector_field] in the mapping. Do we need to make sure we respect that and skip fetching vectors in the codec reader?

And lastly, we need to test that when the user sends _source.excludes: [vector_field] in a search request, it returns the expected results.
