Remove duplicate 'the the' (#116023)
There were many places where `the the` was typed in comments, docs, and messages. All were incorrect and have been replaced with a single `the`.
craigtaverner authored Oct 31, 2024
1 parent 0f38b2b commit c9c1765
Showing 31 changed files with 33 additions and 33 deletions.
@@ -39,7 +39,7 @@
* }
* }
* </pre>
* Will copy the entire core Rest API specifications (assuming the project has tests) and any of the the X-pack specs starting with enrich*.
* Will copy the entire core Rest API specifications (assuming the project has tests) and any of the X-pack specs starting with enrich*.
* It is recommended (but not required) to also explicitly declare which core specs your project depends on to help optimize the caching
* behavior.
* <i>For example:</i>
@@ -66,7 +66,7 @@
* }
* }
* </pre>
* Will copy any of the the x-pack tests that start with graph, and will copy the X-pack graph specification, as well as the full core
* Will copy any of the x-pack tests that start with graph, and will copy the X-pack graph specification, as well as the full core
* Rest API specification.
* <p>
* Additionally you can specify which sourceSetName resources should be copied to. The default is the yamlRestTest source set.
@@ -90,7 +90,7 @@ Find this by https://www.mongodb.com/docs/atlas/tutorial/connect-to-your-cluster
In this example, we'll use the `sample_mflix` database.
* *Collection*: The name of the collection you want to sync.
In this example, we'll use the `comments` collection of the `sample_mflix` database.
* *Username*: The username you created earlier, in the the setup phase.
* *Username*: The username you created earlier, in the setup phase.
* *Password*: The password you created earlier.

Keep these details handy!
2 changes: 1 addition & 1 deletion docs/reference/esql/esql-query-api.asciidoc
@@ -46,7 +46,7 @@ supports this parameter for CSV responses.
`drop_null_columns`::
(Optional, boolean) Should columns that are entirely `null` be removed from
the `columns` and `values` portion of the results? Defaults to `false`. If
`true` the the response will include an extra section under the name
`true` the response will include an extra section under the name
`all_columns` which has the name of all columns.

`format`::
2 changes: 1 addition & 1 deletion docs/reference/ml/anomaly-detection/apis/get-job.asciidoc
@@ -209,7 +209,7 @@ value.
(string) Reserved for future use, currently set to `anomaly_detector`.

`job_version`::
(string) The {ml} configuration version number at which the the job was created.
(string) The {ml} configuration version number at which the job was created.

NOTE: From {es} 8.10.0, a new version number is used to
track the configuration and state changes in the {ml} plugin. This new
@@ -43,7 +43,7 @@ feature_extractors=[
feature_name="title_bm25",
query={"match": {"title": "{{query}}"}}
),
# We want to use the the number of matched terms in the title field as a feature:
# We want to use the number of matched terms in the title field as a feature:
QueryFeatureExtractor(
feature_name="title_matched_term_count",
query={
@@ -192,7 +192,7 @@ When the above `dictionary` parameter is specified, the <<search-application-sea
* It verifies that `query_string` and `default_field` are both strings
* It accepts `default_field` only if it takes the values `title` or `description`
If the parameters are not valid, the the <<search-application-search, search application search>> API will return an error.
If the parameters are not valid, the <<search-application-search, search application search>> API will return an error.
[source,console]
----
POST _application/search_application/my-app/_search
@@ -78,7 +78,7 @@ This command resets the password to an auto-generated value.
./bin/elasticsearch-reset-password -u elastic
----
+
If you want to set the password to a specific value, run the command with the
If you want to set the password to a specific value, run the command with the
interactive (`-i`) parameter.
+
[source,shell]
@@ -93,7 +93,7 @@ interactive (`-i`) parameter.
./bin/elasticsearch-reset-password -u kibana_system
----

. Save the new passwords. In the next step, you'll add the the password for the
. Save the new passwords. In the next step, you'll add the password for the
`kibana_system` user to {kib}.

*Next*: <<add-built-in-users,Configure {kib} to connect to {es} with a password>>
@@ -370,7 +370,7 @@ private static void skipToListStart(XContentParser parser) throws IOException {
}
}

// read a list without bounds checks, assuming the the current parser is always on an array start
// read a list without bounds checks, assuming the current parser is always on an array start
private static List<Object> readListUnsafe(XContentParser parser, Supplier<Map<String, Object>> mapFactory) throws IOException {
assert parser.currentToken() == Token.START_ARRAY;
ArrayList<Object> list = new ArrayList<>();
@@ -335,7 +335,7 @@ public void refresh() {
* Customizes {@link com.amazonaws.auth.WebIdentityTokenCredentialsProvider}
*
* <ul>
* <li>Reads the the location of the web identity token not from AWS_WEB_IDENTITY_TOKEN_FILE, but from a symlink
* <li>Reads the location of the web identity token not from AWS_WEB_IDENTITY_TOKEN_FILE, but from a symlink
* in the plugin directory, so we don't need to create a hardcoded read file permission for the plugin.</li>
* <li>Supports customization of the STS endpoint via a system property, so we can test it against a test fixture.</li>
* <li>Supports gracefully shutting down the provider and the STS client.</li>
@@ -373,7 +373,7 @@ public static String escapePath(Path path) {
}

/**
* Recursively copy the the source directory to the target directory, preserving permissions.
* Recursively copy the source directory to the target directory, preserving permissions.
*/
public static void copyDirectory(Path source, Path target) throws IOException {
Files.walkFileTree(source, new SimpleFileVisitor<>() {
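
An aside for context: a minimal, self-contained sketch of the recursive-copy pattern that the Javadoc above describes, assuming StandardCopyOption.COPY_ATTRIBUTES is sufficient to carry permissions across on the file systems you care about; the class name below is invented for the example and this is not the Elasticsearch helper itself.

import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.BasicFileAttributes;

public final class CopyTreeSketch {
    // Walk the source tree, recreating each directory and copying each file
    // into the corresponding location under the target.
    public static void copyDirectory(Path source, Path target) throws IOException {
        Files.walkFileTree(source, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
                Files.createDirectories(target.resolve(source.relativize(dir)));
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                // COPY_ATTRIBUTES carries basic attributes (and POSIX permissions where supported).
                Files.copy(file, target.resolve(source.relativize(file)), StandardCopyOption.COPY_ATTRIBUTES);
                return FileVisitResult.CONTINUE;
            }
        });
    }
}
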
@@ -70,7 +70,7 @@ public void testSystemIndexManagerUpgradesMappings() throws Exception {
}

/**
* Check that if the the SystemIndexManager finds a managed index with mappings that claim to be newer than
* Check that if the SystemIndexManager finds a managed index with mappings that claim to be newer than
* what it expects, then those mappings are left alone.
*/
public void testSystemIndexManagerLeavesNewerMappingsAlone() throws Exception {
@@ -2402,7 +2402,7 @@ public Metadata build(boolean skipNameCollisionChecks) {
assert previousIndicesLookup.equals(buildIndicesLookup(dataStreamMetadata(), indicesMap));
indicesLookup = previousIndicesLookup;
} else if (skipNameCollisionChecks == false) {
// we have changes to the the entity names so we ensure we have no naming collisions
// we have changes to the entity names so we ensure we have no naming collisions
ensureNoNameCollisions(aliasedIndices.keySet(), indicesMap, dataStreamMetadata());
}
assert assertDataStreams(indicesMap, dataStreamMetadata());
@@ -242,7 +242,7 @@ public Settings getByPrefix(String prefix) {
if (prefix.isEmpty()) {
return this;
}
// create the the next prefix right after the given prefix, and use it as exclusive upper bound for the sub-map to filter by prefix
// create the next prefix right after the given prefix, and use it as exclusive upper bound for the sub-map to filter by prefix
// below
char[] toPrefixCharArr = prefix.toCharArray();
toPrefixCharArr[toPrefixCharArr.length - 1]++;
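
An aside for context: the comment above describes a small trick (increment the last character of the prefix to get an exclusive upper bound, then slice the sorted map between the prefix and that bound), sketched here with a plain NavigableMap; the names below are invented for the example and this is not the Settings implementation.

import java.util.NavigableMap;
import java.util.TreeMap;

public class PrefixBoundSketch {
    // Return the entries whose keys start with the given non-empty prefix by
    // slicing the sorted map between the prefix (inclusive) and the smallest
    // string that sorts after every key carrying that prefix (exclusive).
    static NavigableMap<String, String> byPrefix(NavigableMap<String, String> sorted, String prefix) {
        char[] upperBound = prefix.toCharArray();
        upperBound[upperBound.length - 1]++; // e.g. "cluster." becomes "cluster/"
        return sorted.subMap(prefix, true, new String(upperBound), false);
    }

    public static void main(String[] args) {
        NavigableMap<String, String> settings = new TreeMap<>();
        settings.put("cluster.name", "demo");
        settings.put("cluster.routing.allocation.enable", "all");
        settings.put("node.name", "node-1");
        System.out.println(byPrefix(settings, "cluster.")); // only the cluster.* entries
    }
}
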
@@ -954,7 +954,7 @@ private void notifyFailureOnceAllOutstandingRequestAreDone(Exception e) {

void createRetentionLease(final long startingSeqNo, ActionListener<RetentionLease> listener) {
updateRetentionLease(syncListener -> {
// Clone the peer recovery retention lease belonging to the source shard. We are retaining history between the the local
// Clone the peer recovery retention lease belonging to the source shard. We are retaining history between the local
// checkpoint of the safe commit we're creating and this lease's retained seqno with the retention lock, and by cloning an
// existing lease we (approximately) know that all our peers are also retaining history as requested by the cloned lease. If
// the recovery now fails before copying enough history over then a subsequent attempt will find this lease, determine it is
@@ -435,7 +435,7 @@ public static String getRepositoryDataBlobName(long repositoryGeneration) {
/**
* Flag that is set to {@code true} if this instance is started with {@link #metadata} that has a higher value for
* {@link RepositoryMetadata#pendingGeneration()} than for {@link RepositoryMetadata#generation()} indicating a full cluster restart
* potentially accounting for the the last {@code index-N} write in the cluster state.
* potentially accounting for the last {@code index-N} write in the cluster state.
* Note: While it is true that this value could also be set to {@code true} for an instance on a node that is just joining the cluster
* during a new {@code index-N} write, this does not present a problem. The node will still load the correct {@link RepositoryData} in
* all cases and simply do a redundant listing of the repository contents if it tries to load {@link RepositoryData} and falls back
@@ -128,7 +128,7 @@ protected final void finishPart(T partId) {
}

/**
* Write the contents of {@link #buffer} to storage. Implementations should call {@link #finishPart} at the end to track the the chunk
* Write the contents of {@link #buffer} to storage. Implementations should call {@link #finishPart} at the end to track the chunk
* of data just written and ready {@link #buffer} for the next write.
*/
protected abstract void flushBuffer() throws IOException;
@@ -132,7 +132,7 @@ final void add(QueryToFilterAdapter filter) throws IOException {
}

/**
* Build the the adapter or {@code null} if the this isn't a valid rewrite.
* Build the adapter or {@code null} if this isn't a valid rewrite.
*/
public final T build() throws IOException {
if (false == valid || aggCtx.enableRewriteToFilterByFilter() == false) {
@@ -123,7 +123,7 @@ private static SignificantTermsAggregatorSupplier bytesSupplier() {

/**
* Whether the aggregation will execute. If the main query matches no documents and parent aggregation isn't a global or terms
* aggregation with min_doc_count = 0, the the aggregator will not really execute. In those cases it doesn't make sense to load
* aggregation with min_doc_count = 0, the aggregator will not really execute. In those cases it doesn't make sense to load
* global ordinals.
* <p>
* Some searches that will never match can still fall through and we endup running query that will produce no results.
@@ -556,7 +556,7 @@ public synchronized void removeReleasable(Aggregator aggregator) {
// Removing an aggregator is done after calling Aggregator#buildTopLevel which happens on an executor thread.
// We need to synchronize the removal because he AggregatorContext it is shared between executor threads.
assert releaseMe.contains(aggregator)
: "removing non-existing aggregator [" + aggregator.name() + "] from the the aggregation context";
: "removing non-existing aggregator [" + aggregator.name() + "] from the aggregation context";
releaseMe.remove(aggregator);
}

@@ -113,7 +113,7 @@
* snapshots, we load the {@link org.elasticsearch.snapshots.SnapshotInfo} for the source snapshot and check for shard snapshot
* failures of the relevant indices.</li>
* <li>Once all shard counts are known and the health of all source indices data has been verified, we populate the
* {@code SnapshotsInProgress.Entry#clones} map for the clone operation with the the relevant shard clone tasks.</li>
* {@code SnapshotsInProgress.Entry#clones} map for the clone operation with the relevant shard clone tasks.</li>
* <li>After the clone tasks have been added to the {@code SnapshotsInProgress.Entry}, master executes them on its snapshot thread-pool
* by invoking {@link org.elasticsearch.repositories.Repository#cloneShardSnapshot} for each shard that is to be cloned. Each completed
* shard snapshot triggers a call to the {@link org.elasticsearch.snapshots.SnapshotsService#masterServiceTaskQueue} which updates the
@@ -33,7 +33,7 @@
public class NodeInfoTests extends ESTestCase {

/**
* Check that the the {@link NodeInfo#getInfo(Class)} method returns null
* Check that the {@link NodeInfo#getInfo(Class)} method returns null
* for absent info objects, and returns the right thing for present info
* objects.
*/
@@ -144,7 +144,7 @@ public void testCleanup() {
discoveryNodes[i] = randomDiscoveryNode();
}

// we stop tracking the the oldest absent node(s) when only 1/3 of the tracked nodes are present
// we stop tracking the oldest absent node(s) when only 1/3 of the tracked nodes are present
final int cleanupNodeCount = (discoveryNodes.length - 2) / 3;

final DiscoveryNodes.Builder cleanupNodesBuilder = new DiscoveryNodes.Builder().add(masterNode)
@@ -275,7 +275,7 @@ private static long time(String time, ZoneId zone) {
}

/**
* The the last "fully defined" transitions in the provided {@linkplain ZoneId}.
* The last "fully defined" transitions in the provided {@linkplain ZoneId}.
*/
private static ZoneOffsetTransition lastTransitionIn(ZoneId zone) {
List<ZoneOffsetTransition> transitions = zone.getRules().getTransitions();
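
An aside for context: the clarified Javadoc concerns picking the last historic ("fully defined") offset transition of a time zone. A tiny standalone sketch, with the zone id chosen arbitrarily for the example:

import java.time.ZoneId;
import java.time.zone.ZoneOffsetTransition;
import java.util.List;

public class LastTransitionSketch {
    public static void main(String[] args) {
        // getTransitions() returns the fully defined historic transitions, oldest first.
        List<ZoneOffsetTransition> transitions = ZoneId.of("Europe/Oslo").getRules().getTransitions();
        ZoneOffsetTransition last = transitions.get(transitions.size() - 1);
        System.out.println(last);
    }
}
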
@@ -747,7 +747,7 @@ public void testAllocationBucketsBreaker() {

// make sure used bytes is greater than the total circuit breaker limit
breaker.addWithoutBreaking(200);
// make sure that we check on the the following call
// make sure that we check on the following call
for (int i = 0; i < 1023; i++) {
multiBucketConsumer.accept(0);
}
@@ -27,7 +27,7 @@ protected SignificanceHeuristic getHeuristic() {
/**
* @param includeNegatives value for this test run, should the scores include negative values.
* @param backgroundIsSuperset value for this test run, indicates in NXY significant terms if the background is indeed
* a superset of the the subset, or is instead a disjoint set
* a superset of the subset, or is instead a disjoint set
* @return A random instance of an NXY heuristic to test
*/
protected abstract SignificanceHeuristic getHeuristic(boolean includeNegatives, boolean backgroundIsSuperset);
@@ -64,7 +64,7 @@ public BlobCacheMetrics(MeterRegistry meterRegistry) {
),
meterRegistry.registerDoubleHistogram(
"es.blob_cache.population.throughput.histogram",
"The throughput observed when populating the the cache",
"The throughput observed when populating the cache",
"MiB/second"
),
meterRegistry.registerLongCounter(
@@ -313,7 +313,7 @@ public void execute(TestRequest request, TestTask task, ActionListener<TestRespo
assertThat(task, nullValue());
});

logger.trace("Getting the the final response from the index");
logger.trace("Getting the final response from the index");
StoredAsyncResponse<TestResponse> response = getResponse(responseHolder.get().id, TimeValue.ZERO);
if (success) {
assertThat(response.getException(), nullValue());
@@ -313,7 +313,7 @@ public void execute(TestRequest request, TestTask task, ActionListener<TestRespo
assertThat(task, nullValue());
});

logger.trace("Getting the the final response from the index");
logger.trace("Getting the final response from the index");
StoredAsyncResponse<TestResponse> response = getResponse(responseHolder.get().id, TimeValue.ZERO);
if (success) {
assertThat(response.getException(), nullValue());
@@ -396,7 +396,7 @@ public void onFailure(Exception e) {
});
assertUnblockIn10s(latch2);

// the the client answer
// the client answer
unblock.countDown();
}
}
@@ -70,7 +70,7 @@ public static SearchSourceBuilder sourceBuilder(QueryContainer container, QueryB
// set page size
if (size != null) {
int sz = container.limit() > 0 ? Math.min(container.limit(), size) : size;
// now take into account the the minimum page (if set)
// now take into account the minimum page (if set)
// that is, return the multiple of the minimum page size closer to the set size
int minSize = container.minPageSize();
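// e.g. a requested size of 130 with a minimum page size of 50 yields Math.max(130 / 50, 1) * 50 = 100 (illustrative numbers)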
sz = minSize > 0 ? (Math.max(sz / minSize, 1) * minSize) : sz;
@@ -139,7 +139,7 @@ private Object handleTargetType(Object object) {
return DateUtils.asDateTimeWithMillis(((Number) object).longValue(), zoneId);
} else if (dataType.isInteger()) {
// MIN and MAX need to return the same type as field's and SUM a long for integral types, but ES returns them always as
// floating points -> convert them in the the SELECT pipeline, if needed
// floating points -> convert them in the SELECT pipeline, if needed
return convert(object, dataType);
}
}