
Commit

Merge branch 'main' into dfs-phase-multithreadedcollectors-wip
cbuescher committed Jun 30, 2023
2 parents 33187c2 + 684af2d commit 5ee6598
Showing 533 changed files with 7,355 additions and 3,148 deletions.
2 changes: 2 additions & 0 deletions .ci/bwcVersions
@@ -60,6 +60,7 @@ BWC_VERSION:
- "7.17.9"
- "7.17.10"
- "7.17.11"
- "7.17.12"
- "8.0.0"
- "8.0.1"
- "8.1.0"
@@ -90,5 +91,6 @@ BWC_VERSION:
- "8.8.0"
- "8.8.1"
- "8.8.2"
- "8.8.3"
- "8.9.0"
- "8.10.0"
4 changes: 2 additions & 2 deletions .ci/snapshotBwcVersions
@@ -1,5 +1,5 @@
BWC_VERSION:
- "7.17.11"
- "8.8.2"
- "7.17.12"
- "8.8.3"
- "8.9.0"
- "8.10.0"
@@ -148,7 +148,7 @@ public TopDocs benchmark() throws IOException {

private Query scriptScoreQuery(ScoreScript.Factory factory) {
ScoreScript.LeafFactory leafFactory = factory.newFactory(Map.of(), lookup);
return new ScriptScoreQuery(new MatchAllDocsQuery(), null, leafFactory, lookup, null, "test", 0, IndexVersion.CURRENT);
return new ScriptScoreQuery(new MatchAllDocsQuery(), null, leafFactory, lookup, null, "test", 0, IndexVersion.current());
}

private ScoreScript.Factory bareMetalScript() {
@@ -242,7 +242,7 @@ private DotBinaryFloatBenchmarkFunction(int dims) {

@Override
public void execute(Consumer<Object> consumer) {
new BinaryDenseVector(docFloatVector, docVector, dims, IndexVersion.CURRENT).dotProduct(queryVector);
new BinaryDenseVector(docFloatVector, docVector, dims, IndexVersion.current()).dotProduct(queryVector);
}
}

@@ -290,7 +290,7 @@ private CosineBinaryFloatBenchmarkFunction(int dims) {

@Override
public void execute(Consumer<Object> consumer) {
new BinaryDenseVector(docFloatVector, docVector, dims, IndexVersion.CURRENT).cosineSimilarity(queryVector, false);
new BinaryDenseVector(docFloatVector, docVector, dims, IndexVersion.current()).cosineSimilarity(queryVector, false);
}
}

@@ -338,7 +338,7 @@ private L1BinaryFloatBenchmarkFunction(int dims) {

@Override
public void execute(Consumer<Object> consumer) {
new BinaryDenseVector(docFloatVector, docVector, dims, IndexVersion.CURRENT).l1Norm(queryVector);
new BinaryDenseVector(docFloatVector, docVector, dims, IndexVersion.current()).l1Norm(queryVector);
}
}

@@ -386,7 +386,7 @@ private L2BinaryFloatBenchmarkFunction(int dims) {

@Override
public void execute(Consumer<Object> consumer) {
new BinaryDenseVector(docFloatVector, docVector, dims, IndexVersion.CURRENT).l1Norm(queryVector);
new BinaryDenseVector(docFloatVector, docVector, dims, IndexVersion.current()).l1Norm(queryVector);
}
}

@@ -150,3 +150,6 @@ org.elasticsearch.cluster.service.ClusterService#submitUnbatchedStateUpdateTask(
@defaultMessage Reacting to the published cluster state is an obstruction to batching cluster state tasks which leads to performance and stability bugs. Use the variants that accept a Runnable instead.
org.elasticsearch.cluster.ClusterStateTaskExecutor$TaskContext#success(java.util.function.Consumer)
org.elasticsearch.cluster.ClusterStateTaskExecutor$TaskContext#success(java.util.function.Consumer, org.elasticsearch.cluster.ClusterStateAckListener)

@defaultMessage ClusterState#transportVersions are for internal use only. Use ClusterState#getMinTransportVersion or a different version. See TransportVersion javadocs for more info.
org.elasticsearch.cluster.ClusterState#transportVersions()
5 changes: 5 additions & 0 deletions docs/changelog/93545.yaml
@@ -0,0 +1,5 @@
pr: 93545
summary: Improve error message when aggregation doesn't support counter field
area: Aggregations
type: enhancement
issues: []
25 changes: 25 additions & 0 deletions docs/changelog/96161.yaml
@@ -4,3 +4,28 @@ area: "Search"
type: enhancement
issues:
- 95541
highlight:
title: Better indexing and search performance under concurrent indexing and search
body: "When a query like a match phrase query or a terms query targets a constant keyword field we can skip\
query execution on shards where the query is rewritten to match no documents. We take advantage of index mappings\
including constant keyword fields and rewrite queries in such a way that, if a constant keyword field does not\
match the value defined in the index mapping, we rewrite the query to match no document. This will result in the\
shard level request to return immediately, before the query is executed on the data node and, as a result, skipping\
the shard completely. Here we leverage the ability to skip shards whenever possible to avoid unnecessary shard\
refreshes and improve query latency (by not doing any search-related I/O). Avoiding such unnecessary shard refreshes\
improves query latency since the search thread does not need to wait anymore for unnecessary shard refreshes. Shards\
not matching the query criteria will remain in a search-idle state and indexing throughput will not be negatively\
affected by a refresh. Before introducing this change a query hitting multiple shards, including those with no\
documents matching the search criteria (think about using index patterns or data streams with many backing indices),\
would potentially result in a \"shard refresh storm\" increasing query latency as a result of the search thread\
waiting on all shard refreshes to complete before being able to initiate and carry out the search operation.\
After introducing this change the search thread will just need to wait for refreshes to be completed on shards\
including relevant data. Note that execution of the shard pre-filter and the corresponding \"can match\" phase where\
rewriting happens, depends on the overall number of shards involved and on whether there is at least one of them\
returning a non-empty result (see 'pre_filter_shard_size' setting to understand how to control this behaviour).\
Elasticsearch does the rewrite operation on the data node in the so called \"can match\" phase, taking advantage of\
the fact that, at that moment, we can access index mappings and extract information about constant keyword fields\
and their values. This means we still\"fan-out\" search queries from the coordinator node to involved data nodes.\
Rewriting queries based on index mappings is not possible on the coordinator node because the coordinator node is\
missing index mapping information."
notable: true
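
To make the rewrite concrete, here is a hedged sketch of the kind of setup the highlight describes. The index name `logs-eu`, the field name `region`, and the values are illustrative only, not taken from this commit:

[source,console]
----
PUT logs-eu
{
  "mappings": {
    "properties": {
      "region": { "type": "constant_keyword", "value": "eu" }
    }
  }
}

GET logs-*/_search
{
  "query": {
    "term": { "region": "us" }
  }
}
----

On the shards of `logs-eu`, `region` can only ever be `eu`, so during the "can match" phase the `term` query for `us` is rewritten to match no documents, the shard is skipped, and no refresh is triggered on it.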
5 changes: 0 additions & 5 deletions docs/changelog/96243.yaml (file deleted)
5 changes: 0 additions & 5 deletions docs/changelog/96251.yaml (file deleted)
6 changes: 0 additions & 6 deletions docs/changelog/96540.yaml (file deleted)
5 changes: 0 additions & 5 deletions docs/changelog/96551.yaml (file deleted)
6 changes: 0 additions & 6 deletions docs/changelog/96606.yaml (file deleted)
6 changes: 0 additions & 6 deletions docs/changelog/96668.yaml (file deleted)
5 changes: 0 additions & 5 deletions docs/changelog/96738.yaml (file deleted)
5 changes: 0 additions & 5 deletions docs/changelog/96782.yaml (file deleted)
6 changes: 0 additions & 6 deletions docs/changelog/96785.yaml (file deleted)
6 changes: 0 additions & 6 deletions docs/changelog/96821.yaml (file deleted)
6 changes: 0 additions & 6 deletions docs/changelog/96843.yaml (file deleted)
5 changes: 5 additions & 0 deletions docs/changelog/97041.yaml
@@ -0,0 +1,5 @@
pr: 97041
summary: Introduce downsampling configuration for data stream lifecycle
area: Data streams
type: feature
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/97079.yaml
@@ -0,0 +1,5 @@
pr: 97079
summary: Enable Serverless API protections dynamically
area: Infra/REST API
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/97111.yaml
@@ -0,0 +1,5 @@
pr: 97111
summary: Fix cluster settings update task acknowledgment
area: Cluster Coordination
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/97142.yaml
@@ -0,0 +1,5 @@
pr: 97142
summary: The model loading service should not notify listeners in a sync block
area: Machine Learning
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/97159.yaml
@@ -0,0 +1,5 @@
pr: 97159
summary: Improve exists query rewrite
area: Search
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/97203.yaml
@@ -0,0 +1,5 @@
pr: 97203
summary: Fix possible NPE when transportversion is null in `MainResponse`
area: Infra/REST API
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/97208.yaml
@@ -0,0 +1,5 @@
pr: 97208
summary: Improve match query rewrite
area: Search
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/97209.yaml
@@ -0,0 +1,5 @@
pr: 97209
summary: Improve prefix query rewrite
area: Search
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/97224.yaml
@@ -0,0 +1,5 @@
pr: 97224
summary: Remove exception wrapping in `BatchedRerouteService`
area: Allocation
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/97234.yaml
@@ -0,0 +1,5 @@
pr: 97234
summary: Add "operator" field to authenticate response
area: Authorization
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/97274.yaml
@@ -0,0 +1,5 @@
pr: 97274
summary: Improve model downloader robustness
area: Machine Learning
type: bug
issues: []
2 changes: 1 addition & 1 deletion docs/plugins/analysis-nori.asciidoc
@@ -305,7 +305,7 @@ Which responds with:

The `nori_part_of_speech` token filter removes tokens that match a set of
part-of-speech tags. The list of supported tags and their meanings can be found here:
{lucene-core-javadoc}/../analyzers-nori/org/apache/lucene/analysis/ko/POS.Tag.html[Part of speech tags]
{lucene-core-javadoc}/../analysis/nori/org/apache/lucene/analysis/ko/POS.Tag.html[Part of speech tags]

It accepts the following setting:

23 changes: 19 additions & 4 deletions docs/reference/data-management.asciidoc
@@ -20,17 +20,32 @@ so you can move it to less expensive, less performant hardware.
For your oldest data, what matters is that you have access to the data.
It's ok if queries take longer to complete.

To help you manage your data, {es} enables you to:
To help you manage your data, {es} offers you:

* <<index-lifecycle-management, {ilm-cap}>> ({ilm-init}), which can manage both indices and data streams and is fully customisable, and
* <<data-stream-lifecycle, Data stream lifecycle>>, the built-in lifecycle of data streams, which addresses the most
common lifecycle management needs.

preview::["The built-in data stream lifecycle is in technical preview and may be changed or removed in a future release. Elastic will apply best effort to fix any issues, but this feature is not subject to the support SLA of official GA features."]

**{ilm-init}** can be used to manage both indices and data streams, and it allows you to:

* Define the retention period of your data. The retention period is the minimum time your data will be stored in {es}.
Data older than this period can be deleted by {es}.
* Define <<data-tiers, multiple tiers>> of data nodes with different performance characteristics.
* Automatically transition indices through the data tiers according to your performance needs and retention policies
with <<index-lifecycle-management, {ilm}>> ({ilm-init}).
* Automatically transition indices through the data tiers according to your performance needs and retention policies.
* Leverage <<searchable-snapshots, searchable snapshots>> stored in a remote repository to provide resiliency
for your older indices while reducing operating costs and maintaining search performance.
* Perform <<async-search-intro, asynchronous searches>> of data stored on less-performant hardware.
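
As an illustration of the retention and tier-transition points above, a minimal {ilm-init} policy sketch (the policy name, ages, and sizes are assumptions made for this example, not values from this commit):

[source,console]
----
PUT _ilm/policy/my-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "7d", "max_primary_shard_size": "50gb" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
----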

**Data stream lifecycle** is less feature-rich but focuses on simplicity, allowing you to easily:

* Define the retention period of your data. The retention period is the minimum time your data will be stored in {es}.
Data older than this period can be deleted by {es} at a later time.
* Improve the performance of your data stream by performing background operations that will optimise the way your data
stream is stored.
--
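
By contrast, a data stream lifecycle is configured directly on the data stream. A minimal sketch, assuming the lifecycle endpoint documented later in this commit and a `data_retention` body field; the data stream name and retention value are illustrative:

[source,console]
----
PUT _data_stream/my-data-stream/_lifecycle
{
  "data_retention": "7d"
}
----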

include::ilm/index.asciidoc[]

include::datatiers.asciidoc[]

16 changes: 16 additions & 0 deletions docs/reference/data-streams/data-stream-apis.asciidoc
@@ -12,6 +12,14 @@ The following APIs are available for managing <<data-streams,data streams>>:
* <<promote-data-stream-api>>
* <<modify-data-streams-api>>

[[data-stream-lifecycle-api]]
The following APIs are available for managing the built-in lifecycle of data streams:

* <<data-streams-put-lifecycle,Update data stream lifecycle>> preview:[]
* <<data-streams-get-lifecycle,Get data stream lifecycle>> preview:[]
* <<data-streams-delete-lifecycle,Delete data stream lifecycle>> preview:[]
* <<data-streams-explain-lifecycle,Explain data stream lifecycle>> preview:[]

The following API is available for <<tsds,time series data streams>>:

* <<indices-downsample-data-stream>>
@@ -33,4 +41,12 @@ include::{es-repo-dir}/data-streams/promote-data-stream-api.asciidoc[]

include::{es-repo-dir}/data-streams/modify-data-streams-api.asciidoc[]

include::{es-repo-dir}/data-streams/lifecycle/apis/put-lifecycle.asciidoc[]

include::{es-repo-dir}/data-streams/lifecycle/apis/get-lifecycle.asciidoc[]

include::{es-repo-dir}/data-streams/lifecycle/apis/delete-lifecycle.asciidoc[]

include::{es-repo-dir}/data-streams/lifecycle/apis/explain-lifecycle.asciidoc[]

include::{es-repo-dir}/indices/downsample-data-stream.asciidoc[]
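
The lifecycle APIs listed above all operate on the `_data_stream/<data-stream>/_lifecycle` path. For instance, retrieving the lifecycle of a data stream might look like this (a sketch; the data stream name is made up):

[source,console]
----
GET _data_stream/my-data-stream/_lifecycle
----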
1 change: 1 addition & 0 deletions docs/reference/data-streams/data-streams.asciidoc
@@ -135,3 +135,4 @@ include::set-up-a-data-stream.asciidoc[]
include::use-a-data-stream.asciidoc[]
include::change-mappings-and-settings.asciidoc[]
include::tsds.asciidoc[]
include::lifecycle/index.asciidoc[]
@@ -1,10 +1,10 @@
[[dlm-delete-lifecycle]]
[[data-streams-delete-lifecycle]]
=== Delete the lifecycle of a data stream
++++
<titleabbrev>Delete Data Stream Lifecycle</titleabbrev>
++++

experimental::[]
preview::[]

Deletes the lifecycle from a set of data streams.

@@ -14,18 +14,18 @@ Deletes the lifecycle from a set of data streams.
* If the {es} {security-features} are enabled, you must have the `manage_data_stream_lifecycle` index privilege or higher to
use this API. For more information, see <<security-privileges>>.

[[dlm-delete-lifecycle-request]]
[[data-streams-delete-lifecycle-request]]
==== {api-request-title}

`DELETE _data_stream/<data-stream>/_lifecycle`

[[dlm-delete-lifecycle-desc]]
[[data-streams-delete-lifecycle-desc]]
==== {api-description-title}

Deletes the lifecycle from the specified data streams. If multiple data streams are provided but at least one of them
does not exist, then the deletion of the lifecycle will fail for all of them and the API will respond with `404`.

[[dlm-delete-lifecycle-path-params]]
[[data-streams-delete-lifecycle-path-params]]
==== {api-path-parms-title}

`<data-stream>`::
@@ -41,7 +41,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=ds-expand-wildcards]
+
Defaults to `open`.

[[dlm-delete-lifecycle-example]]
[[data-streams-delete-lifecycle-example]]
==== {api-examples-title}

////
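
A minimal sketch of a call matching the request format documented above (`DELETE _data_stream/<data-stream>/_lifecycle`); the data stream name is illustrative:

[source,console]
----
DELETE _data_stream/my-data-stream/_lifecycle
----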