diff --git a/docs/antora.yml b/docs/antora.yml index ecb674bbf..c8afa0063 100644 --- a/docs/antora.yml +++ b/docs/antora.yml @@ -14,6 +14,7 @@ asciidoc: minor-version: '6.0-SNAPSHOT' # The snapshot version for installing with brew version-brew: '6.0.0-SNAPSHOT' + java-client-standalone-version: '5.5.0-BETA' # Allows us to use UI macros. See https://docs.asciidoctor.org/asciidoc/latest/macros/ui-macros/ experimental: true snapshot: true @@ -23,6 +24,7 @@ asciidoc: # Must be lowercase because this is how the version appears in the docs page-latest-supported-mc: '5.6-snapshot' page-latest-supported-java-client: '6.0.0-SNAPSHOT' + page-latest-supported-java-client-new: '5.5.0-BETA' # https://github.com/hazelcast/hazelcast-go-client/releases page-latest-supported-go-client: '1.4.2' # https://github.com/hazelcast/hazelcast-cpp-client/releases @@ -37,5 +39,11 @@ asciidoc: page-latest-supported-clc: '5.4.1' open-source-product-name: 'Community Edition' enterprise-product-name: 'Enterprise Edition' + java-client-new: 'Java Client (Standalone)' + java-client: 'Java Client and Embedded Server' + url-cloud-signup: https://cloud.hazelcast.com/sign-up + hazelcast-cloud: Cloud + ucn: User Code Namespaces + ucd: User Code Deployment nav: - modules/ROOT/nav.adoc diff --git a/docs/modules/clients/pages/java.adoc b/docs/modules/clients/pages/java.adoc index 1d2d5fa54..f0707ab7a 100644 --- a/docs/modules/clients/pages/java.adoc +++ b/docs/modules/clients/pages/java.adoc @@ -1,19 +1,61 @@ = Java Client :page-api-reference: https://docs.hazelcast.org/docs/{page-latest-supported-java-client}/javadoc -:url-cloud-signup: https://cloud.hazelcast.com/sign-up -:page-toclevels: 3 +:page-toclevels: 1 +:description: Hazelcast provides a {java-client} within the standard distribution you can start using right away, and also a lightweight {java-client-new} that is available in Beta. [[java-client]] -TIP: For the latest Java API documentation, see https://docs.hazelcast.org/docs/{page-latest-supported-java-client}/javadoc[Hazelcast Java Client docs]. +// check redirects -To get started, include the `hazelcast.jar` dependency in your classpath. Once included, you can start using this client as if -you are using the Hazelcast API. The differences are discussed in the below sections. +== Overview -NOTE: If you have a Hazelcast {enterprise-product-name} license, you do not need to set the license key in your Hazelcast Java clients to use the xref:getting-started:editions.adoc#features-in-hazelcast-enterprise[{enterprise-product-name} features] - setting it on the member side is enough. In this case, you only need to include the `hazelcast-enterprise.jar` dependency in your classpath. +Hazelcast provides a {java-client} which you can use to connect to a Hazelcast cluster. `hazelcast-.jar` is bundled in the Hazelcast standard package, so just add `hazelcast-.jar` to your classpath and you can start using this client as if you are using the Hazelcast API. -If you prefer to use Maven, simply add the `hazelcast` dependency -to your `pom.xml` (or the `hazelcast-enterprise` dependency, if you want the client to use {enterprise-product-name} features provided that you have the Hazelcast {enterprise-product-name} license), -which you may already have done to start using Hazelcast: +If you are interested in using a standalone or lightweight Java client, you can try the {java-client-new}. This client is currently available as Beta functionality but can interact with a Hazelcast cluster without being a full member. 
Please note that the {java-client-new} doesn't have full feature parity with the {java-client} yet and is not recommended for production environments. For more info, see xref:java#java-client-standalone-beta[]. + +NOTE: Where there are specific differences between {java-client} and {java-client-new}, this documentation will specify the appropriate client. Otherwise you can assume that generic references to client refer to both versions of the Java client. + +// check production recommendation + +Both native ({java-client}) and standalone ({java-client-new}) clients enable you to use the Hazelcast API, with this page explaining any differences or technical details that affect usage. This page should be read alongside the respective Javadoc-generated API documentation available from within your IDE and the following links: + +* https://docs.hazelcast.org/docs/{page-latest-supported-java-client}/javadoc[Hazelcast {java-client} API documentation] +* https://docs.hazelcast.org/hazelcast-java-client/{page-latest-supported-java-client-new}/javadoc[Hazelcast {java-client-new} API documentation] + +== Get started + +* xref:java#get-started-with-java-client-and-embedded-server[] +* xref:java#get-started-with-java-client-standalone-beta[] + +=== Get started with {java-client} + +To get started using the {java-client}, you need to include the `hazelcast.jar` dependency in your classpath. You can then start using this client as if +you are using the Hazelcast API. + +NOTE: If you have a Hazelcast {enterprise-product-name} license, you don't need to set the license key in your Hazelcast Java clients to use the xref:getting-started:editions.adoc#features-in-hazelcast-enterprise[{enterprise-product-name} features]. You only have to set it on the member side, and include the `hazelcast-enterprise-.jar` dependency in your classpath. + +If you prefer to use Maven, make sure you have added the appropriate `hazelcast` or `hazelcast-enterprise` dependency to your `pom.xml`: + +[tabs] +==== +{enterprise-product-name}:: ++ +-- +Add the `hazelcast-enterprise` dependency to your `pom.xml`: + +[source,xml,subs="attributes+"] +---- + + com.hazelcast + hazelcast-enterprise + {full-version} + +---- +NOTE: +-- +{open-source-product-name}:: ++ +-- +Add the `hazelcast` dependency to your `pom.xml`: [source,xml,subs="attributes+"] ---- @@ -24,14 +66,73 @@ which you may already have done to start using Hazelcast: ---- -You can find Hazelcast Java client's code samples https://github.com/hazelcast/hazelcast-code-samples/tree/master/clients[here^]. +-- +==== + +You can find {java-client} code samples in the https://github.com/hazelcast/hazelcast-code-samples/tree/master/clients[Hazelcast Code Samples repository]. + +TIP: For a tutorial on getting started with Java in an embedded topology, see xref:getting-started:get-started-java.adoc[]. + +=== Get started with {java-client-new} (BETA) + +To get started using the {java-client-new}, you need to add the `hazelcast-java-client` dependency to your pom.xml, as shown below. You can then start using this client as if +you are using the Hazelcast API. 
+[source,xml,subs="attributes+"] +---- + + com.hazelcast + hazelcast-java-client + {java-client-standalone-version} + +---- + +If you are using `hazelcast-enterprise-java-client`, you need to add the `hazelcast-enterprise-java-client` dependency and private hazelcast repository to your pom.xml file, as shown below: +[source,xml,subs="attributes+"] +---- + + com.hazelcast + hazelcast-enterprise-java-client + {java-client-standalone-version} + + + + + private-repository + Hazelcast Private Repository + https://repository.hazelcast.com/release/ + + true + + + false + + + +---- + +==== Migrate to {java-client-new} (BETA) +To migrate an application from the {java-client} to the {java-client-new}, you only have to update the dependency as described above. + +[java-client-standalone] +==== {java-client-new} (BETA) -== Client API +The {java-client-new} is only available as a Beta release and does not have full feature parity with the {java-client}. Please note the following differences and restrictions: -The client API is your gateway to access your Hazelcast cluster, including distributed objects and data pipelines (jobs). +// check standalone -The first step is the configuration. You can configure the Java client xref:configuration:understanding-configuration.adoc[declaratively or -programmatically]. We use the programmatic approach for this section. +* Hazelcast Cloud is not supported +* You cannot use the {java-client} and the {java-client-new} on the same JVM +* Any methods that raise the`UnsupportedOperationException` exception are not available e.g. `addLocalEntryListener(@Nonnull MapListener listener)` +* MultiMap and Set are not supported data structures +* Some client system properties are not supported (see individual notes) + +=== Client API +The Client API is your gateway to access your Hazelcast cluster, including distributed objects and data pipelines (jobs). + +First, you must configure your client. You can use either xref:configuration:understanding-configuration.adoc[declarative or +programmatic configuration] to do this. + +The following examples demonstrate the programmatic approach. [source,java] ---- @@ -40,15 +141,15 @@ clientConfig.setClusterName("dev"); clientConfig.getNetworkConfig().addAddress("10.90.0.1", "10.90.0.2:5702"); ---- -See the <> for more information. +For further information on client configuration, see <>. -The second step is initializing the `HazelcastInstance` to be connected to the cluster. +After completing the client configuration, you must create an `HazelcastClient` instance that will initialize and connect to the client based on the specified configuration: -``` +```java HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig); ``` -To create a map and populate it with some data: +You can create a distributed map and populate it with some data as follows: [source,java] ---- @@ -59,18 +160,94 @@ mapCustomers.put("2", new Customer("Ali", "Selam")); mapCustomers.put("3", new Customer("Avi", "Noyan")); ---- -For details about using maps, see xref:data-structures:map.adoc[]. +For further information about using maps, see xref:data-structures:map.adoc[]. -As the final step, if and when you are done with your client, you can shut it down as shown below: +Lastly, after setting up your client, you can shut it down as follows: ```java client.shutdown(); ``` -The above code line releases all the used resources and closes connections to the cluster. +This command releases all used resources and closes all connections to the cluster. 
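For reference, the steps above can be combined into one small runnable program. The following is only a minimal sketch: the cluster name, member addresses, map name, and entries are placeholder values, and plain strings are stored instead of the `Customer` objects used above so that the example is self-contained.

[source,java]
----
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class ClientQuickstart {
    public static void main(String[] args) {
        // Configure the target cluster (placeholder name and addresses)
        ClientConfig clientConfig = new ClientConfig();
        clientConfig.setClusterName("dev");
        clientConfig.getNetworkConfig().addAddress("10.90.0.1", "10.90.0.2:5702");

        // Initialize the client and connect to the cluster
        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
        try {
            // Create a distributed map and read an entry back through the client proxy
            IMap<String, String> mapCustomers = client.getMap("customers");
            mapCustomers.put("1", "Joe Smith");
            System.out.println(mapCustomers.get("1"));
        } finally {
            // Release all used resources and close connections to the cluster
            client.shutdown();
        }
    }
}
----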
+ +== Distributed data structures + +=== Supported data structures + +Hazelcast offers distributed implementations of many common data structures, most of which are supported by the {java-client} and {java-client-new}. + +When you use clients in other languages, you should review the appropriate client documentation for exceptions and details. As a general rule, you should configure these data structures on the server side and +access them through a proxy on the client side. + +=== Use Map + +You can use any distributed map object with the client, as follows: + +[source,java] +---- +Imap map = client.getMap("myMap"); + +map.put(1, "John"); +String value= map.get(1); +map.remove(1); +---- + +The `addLocalEntryListener()` and `localKeySet()` methods are not supported because locality is ambiguous for the client. For more information, see xref:data-structures:map.adoc[]. + +=== Use MultiMap + +NOTE: This section is only applicable to the {java-client}. + +You can use a distributed multiMap object with the {java-client}, as follows: + +[source,java] +---- +MultiMap multiMap = client.getMultiMap("myMultiMap"); + +multiMap.put(1,"John"); +multiMap.put(1,"Mary"); + +Collection values = multiMap.get(1); +---- + +The `addLocalEntryListener()`, `localKeySet()` and `getLocalMultiMapStats()` methods are not +supported because locality is ambiguous for the client. For more information, see xref:data-structures:multimap.adoc[]. + +=== Use Queue + +You can use a distributed Queue object with the client, as follows: + +[source,java] +---- +IQueue myQueue = client.getQueue("theQueue"); +myQueue.offer("John") +---- + +The `getLocalQueueStats()` method is not supported because locality is ambiguous for the client. +For more information, see xref:data-structures:queue.adoc[]. + +=== Use Topic -=== Client Cluster Routing Modes +The `getLocalTopicStats()` method is not supported because locality is ambiguous for the client. + +=== Other supported distributed structures + +The distributed data structures listed below are also supported. +The logic is the same for both member and client side, so see the specific sections for more information on usage. + +* xref:data-structures:replicated-map.adoc[Replicated Map] +* xref:data-structures:list.adoc[List] +* xref:data-structures:set.adoc[Set] (not supported by {java-client-new}) +* xref:data-structures:iatomiclong.adoc[IAtomicLong] +* xref:data-structures:iatomicreference.adoc[IAtomicReference] +* xref:data-structures:icountdownlatch.adoc[ICountDownLatch] +* xref:data-structures:isemaphore.adoc[ISemaphore] +* xref:data-structures:flake-id-generator.adoc[FlakeIdGenerator] +* xref:data-structures:fencedlock.adoc[Lock] +* xref:data-structures:cpmap.adoc[CPMap] +== Configure the client +=== Client cluster routing modes The cluster routing mode specifies how the client connects to the cluster. It can currently be used only with Java and .NET clients. NOTE: In previous releases, this functionality was known as the client operation mode and could be configured as smart or unisocket. @@ -98,7 +275,7 @@ In `ALL_MEMBERS` cluster routing mode, clients connect to each cluster member. Since clients are aware of xref:overview:data-partitioning.adoc[data partitions], they are able to send an operation directly to the cluster member that owns the partition holding their data, which increases the overall throughput and efficiency. 
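If you want to see how a routing mode is selected in code, the following minimal sketch uses the programmatic API that is also shown later on this page; `ALL_MEMBERS` is used here purely as an illustration.

[source,java]
----
// Select how the client routes operations to the cluster
ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig()
        .getClusterRoutingConfig()
        .setRoutingMode(RoutingMode.ALL_MEMBERS);
----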
+ -If <> is enabled on your clients, and the `ADVANCED_CP` +If <> is enabled on your clients, and the `ADVANCED_CP` license is present on your Enterprise cluster, then clients in this routing mode can use this to send CP operations directly to group leaders wherever possible, even after leadership changes. @@ -142,271 +319,252 @@ For information on configuring the cluster routing mode, see <>. +While the client initially tries to connect to one of the members in the `ClientNetworkConfig.addressList`, it's possible that not all members are available. +Instead of giving up, throwing an exception and stopping, the client continues to attempt to connect as configured. +For information on the available configuration, see <>. -The client executes each operation through the already established connection(s) to the cluster. -If these connection(s) disconnect or drop, the client tries to reconnect as configured. +The client executes each operation through the already established connection to the cluster. +If this connection disconnects or drops, the client tries to reconnect as configured. If using the `MULTI_MEMBER` cluster routing mode, and the cluster has multiple partition groups defined and the client connection to a partition group fails, connectivity is maintained by failing over to an alternative partition group. If the connection is lost, which occurs only if all members of the partition group become unavailable, there is no attempt to retry the connection before failing over to another partition group. -For further information on client cluster routing modes, see <>. +For more information on client cluster routing modes, see <>. -**Handling Retry-able Operation Failure:** +==== Retry-able operations failure While sending the requests to related members, operations can fail due to various reasons. Read-only operations are retried by default. If you want to enable retry for the other operations, -you can set the `redoOperation` to `true`. See the <>. +you can set the `redoOperation` to `true`. For more info, see <>. You can set a timeout for retrying the operations sent to a member. -This can be provided by using the property `hazelcast.client.invocation.timeout.seconds` in `ClientProperties`. -The client retries an operation within this given period, of course, if it is a read-only operation or -you enabled the `redoOperation` as stated in the above paragraph. -This timeout value is important when there is a failure resulted by either of the following causes: +This can be provided by using the `hazelcast.client.invocation.timeout.seconds` property in `ClientProperties`. +The client retries an operation within this given period, if it is a read-only operation, or if +you enabled the `redoOperation` as described above. +This timeout value is important when there is a failure caused by any of the following: -* Member throws an exception. -* Connection between the client and member is closed. -* Client's heartbeat requests are timed out. +* Member throws an exception +* Connection between the client and member is closed +* Client's heartbeat requests time out -See the <> -for the description of the `hazelcast.client.invocation.timeout.seconds` property. +See <> for a description of the `hazelcast.client.invocation.timeout.seconds` property. 
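As a rough illustration of the retry settings described above, the following sketch enables `redoOperation` and sets the invocation timeout property; the timeout value shown is only a placeholder.

[source,java]
----
ClientConfig clientConfig = new ClientConfig();

// Retry all operations, not only read-only ones (retried operations may run more than once)
clientConfig.getNetworkConfig().setRedoOperation(true);

// Upper bound, in seconds, for retrying an invocation before it fails
clientConfig.setProperty("hazelcast.client.invocation.timeout.seconds", "120");
----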
-When any failure happens between a client and member -(such as an exception on the member side or connection issues), an operation is retried if: +When any failure happens between a client and member (such as an exception on the member side or connection issues), an operation is retried if: * it is certain that it has not run on the member yet -* or if it is idempotent such as a read-only operation, i.e., retrying does not have a side effect. - -If it is not certain whether the operation has run on the member, -then the non-idempotent operations are not retried. -However, as explained in the first paragraph of this section, -you can force all client operations to be retried (`redoOperation`) -when there is a failure between the client and member. -But in this case, you should know that some operations may run multiple times causing conflicts. -For example, assume that your client sent a `queue.offer` operation to the member and -then the connection is lost. Since there will be no respond for this operation, -you will not know whether it has run on the member or not. If you enabled `redoOperation`, -that `queue.offer` operation may rerun and this causes the same objects to be offered twice in the member's queue. - -=== Using Supported Distributed Data Structures +* it is idempotent such as a read-only operation, i.e. retrying does not have a side effect. -Most of the distributed data structures are supported by the Java client. -When you use clients in other languages, you should check for the exceptions. +If it is not certain whether the operation has run on the member, then the non-idempotent operations are not retried. +However, as explained earlier, you can force all client operations to be retried (`redoOperation`) when there is a failure between the client and member. +But in this case, some operations may run multiple times and therefore cause conflicts. +For example, assume that your client sent a `queue.offer` operation to the member and then the connection is lost. Because there is no respond for this operation, you won't know whether it has run on the member or not. If you enabled `redoOperation`, that specific `queue.offer` operation may rerun and this will cause the same objects to be offered twice in the member's queue. -As a general rule, you configure these data structures on the server side and -access them through a proxy on the client side. - -==== Using Map with Java Client +=== Configure client listeners -You can use any distributed map object with the client, as shown below. +You can configure global event listeners using `ListenerConfig` as the following examples show: [source,java] ---- -Imap map = client.getMap("myMap"); - -map.put(1, "John"); -String value= map.get(1); -map.remove(1); +ClientConfig clientConfig = new ClientConfig(); +ListenerConfig listenerConfig = new ListenerConfig(LifecycleListenerImpl); +clientConfig.addListenerConfig(listenerConfig); ---- -Locality is ambiguous for the client, so `addLocalEntryListener()` and -`localKeySet()` methods are not supported. See xref:data-structures:map.adoc[] -for more information. - -==== Using MultiMap with Java Client - -A MultiMap usage example is shown below. 
- [source,java] ---- -MultiMap multiMap = client.getMultiMap("myMultiMap"); - -multiMap.put(1,"John"); -multiMap.put(1,"Mary"); - -Collection values = multiMap.get(1); +ClientConfig clientConfig = new ClientConfig(); +ListenerConfig listenerConfig = new ListenerConfig("com.hazelcast.example.MembershipListenerImpl"); +clientConfig.addListenerConfig(listenerConfig); ---- -The `addLocalEntryListener()`, `localKeySet()` and `getLocalMultiMapStats()` methods are not -supported because locality is ambiguous for the client. -See xref:data-structures:multimap.adoc[] for more information. - -==== Using Queue with Java Client +You can add the following types of event listeners: -An example usage is shown below. +* `LifecycleListener`` +* `MembershipListener`` +* `DistributedObjectListener`` -[source,java] ----- -IQueue myQueue = client.getQueue("theQueue"); -myQueue.offer("John") ----- +=== Configure client near cache -The `getLocalQueueStats()` method is not supported because locality is ambiguous for the client. -See xref:data-structures:queue.adoc[] for more information. +To increase the performance of local read operations, the distributed map supports a local near cache for remotely stored entries. Because the client always requests data from +the cluster members, it can be helpful in some use cases to configure a near cache on the client side. For a detailed explanation of this feature and its configuration, see xref:performance:near-cache.adoc[Near cache]. -==== Using Topic with Java Client +=== Configure client cluster name -The `getLocalTopicStats()` method is not supported because locality is ambiguous for the client. +Clients should provide a cluster name in order to connect to the cluster. +You can configure it using `ClientConfig`, as the following example shows: -==== Using Other Supported Distributed Structures +``` +clientConfig.setClusterName("dev"); +``` -The distributed data structures listed below are also supported by the client. -Since their logic is the same in both the member side and client side, you can see -their sections as listed below. +[[client-security-configuration]] +=== Configure client security +[blue]*Hazelcast {enterprise-product-name}* -* xref:data-structures:replicated-map.adoc[Replicated Map] -* xref:data-structures:list.adoc[List] -* xref:data-structures:set.adoc[Set] -* xref:data-structures:iatomiclong.adoc[IAtomicLong] -* xref:data-structures:iatomicreference.adoc[IAtomicReference] -* xref:data-structures:icountdownlatch.adoc[ICountDownLatch] -* xref:data-structures:isemaphore.adoc[ISemaphore] -* xref:data-structures:flake-id-generator.adoc[FlakeIdGenerator] -* xref:data-structures:fencedlock.adoc[Lock] -* xref:data-structures:cpmap.adoc[CPMap] +You can define control mechanisms for clients to control authentication and authorisation. For more information, see xref:security:native-client-security.adoc[]. -=== Using Client Services +You can provide the Java client with an identity for cluster authentication. The identity of the connecting client is defined on the client side. +Usually, there are no security realms on the clients; only the identity defined in the security configuration. -Hazelcast provides the services discussed below for some common functionalities on the client side. +[tabs] +==== +XML:: ++ +-- -==== Using Distributed Executor Service +[source,xml] +---- + + ... + + + + ... + +---- +-- -The distributed executor service is for distributed computing. 
-It can be used to execute tasks on the cluster on a designated partition or on all the partitions. -It can also be used to process entries. See xref:computing:executor-service.adoc[] for more information. +YAML:: ++ +[source,yaml] +---- +hazelcast-client: + security: + username-password: + username: uid=member1,dc=example,dc=com + password: s3crEt +---- +==== -``` -IExecutorService executorService = client.getExecutorService("default"); -``` +On the clients, you can use the same identity types as in the security realms: -After getting an instance of `IExecutorService`, you can use the instance as -the interface with the one provided on the server side. See -xref:computing:distributed-computing.adoc[] for detailed usage. +* `username-password` +* `token` +* `kerberos` (may require an additional security realm definition) +* `credentials-factory` -NOTE: This service is supported only by the Java client. +==== Security realms on the client side -==== Listening to Client Connection +Hazelcast offers limited support for security realms in the Java client. +You can configure the client to use JAAS login modules that can be referenced from +the Kerberos identity configuration. -If you need to track clients and you want to listen to their connection events, -you can use the `clientConnected()` and `clientDisconnected()` methods of the `ClientService` class. -This class must be run on the **member** side. The following is an example code. +[tabs] +==== +XML:: ++ +-- -[source,java] +[source,xml] ---- -include::ROOT:example$/clients/ListeningClients.java[tag=lc] + + + ACME.COM + krb5Initiator + + + + + + + + true + true + + + + + + + ---- +-- -==== Finding the Partition of a Key - -You use partition service to find the partition of a key. -It returns all partitions. See the example code below. - -[source,java] +YAML:: ++ +[source,yaml] ---- -PartitionService partitionService = client.getPartitionService(); +security: + kerberos: + realm: ACME.COM + security-realm: krb5Initiator + realms: + name: krb5Initiator + authentication: + jaas: + class-name: com.sun.security.auth.module.Krb5LoginModule + usage: REQUIRED + properties: + useTicketCache: true + doNotPrompt: true +---- +==== -//partition of a key -Partition partition = partitionService.getPartition(key); +For more information, see the appropriate API documentation for your client: -//all partitions -Set partitions = partitionService.getPartitions(); ----- +* https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/client/config/ClientSecurityConfig.html[{java-client-new} ClientSecurityConfig API documentation] +* https://docs.hazelcast.org/docs/{page-latest-supported-java-client-new}/javadoc/com/hazelcast/client/config/ClientSecurityConfig.html[{java-client} ClientSecurityConfig API documentation] -==== Handling Lifecycle +[[classloader]] +=== Configure ClassLoader -Lifecycle handling performs: +You can configure a custom `classLoader` for your client. +It is used by the serialization service and loads any class specified in the configuration, including event listeners or ProxyFactories. -* checking if the client is running -* shutting down the client gracefully -* terminating the client ungracefully (forced shutdown) -* adding/removing lifecycle listeners. 
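The following is a minimal sketch of setting a custom class loader programmatically; `MyEntryListener` is only a stand-in for a class that ships with your application.

[source,java]
----
ClientConfig clientConfig = new ClientConfig();

// Resolve listener and serialization classes through the application's own class loader
clientConfig.setClassLoader(MyEntryListener.class.getClassLoader());

HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
----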
+[[configuring-direct-to-leader-routing]] +=== Configure CP direct-to-leader operation routing for clients -[source,java] ----- -LifecycleService lifecycleService = client.getLifecycleService(); +When operating a Hazelcast Enterprise cluster with the `ADVANCED_CP` license it is possible to configure clients to +leverage direct-to-leader routing for xref:cp-subsystem:cp-subsystem.adoc[CP Subsystem] operations. When enabled, +this functionality enables clients to receive a mapping of CP group leadership from the cluster and use it to send +CP data structure operations directly to the relevant group leader. This leadership mapping is also updated whenever +leadership changes occur. -if(lifecycleService.isRunning()){ - //it is running -} +CP data structure reads and writes must be actioned by the CP leader responsible for the group involved. By leveraging +direct-to-leader routing for CP operations, clients are able to send all operations directly to their group leaders, +cutting out the need for intermediate hops through other cluster members. This allows clients to achieve lower latency and +higher throughput for their CP operations, while also reducing the pressure on the internal cluster network, resulting in +greater cluster stability. -//shutdown client gracefully -lifecycleService.shutdown(); ----- +This functionality is disabled by default and must be explicitly enabled. This is done because you should consider your +specific use-case for CP operation sending and assess the impact of direct to leader routing on your topology. In scenarios +where clients have increased latency to CP group leaders, it may be detrimental to route all operations directly to them +instead of using a faster internal cluster link and routing through another member. You should also consider that +direct-to-leader routing can put uneven pressure on the cluster if CP group leaders receive a substantially greater load than +other members of the cluster, which is particularly problematic when only one CP group leader is present. -=== Querying with SQL +NOTE: If a client does not have an active connection to a known CP group leader then the client will be unable to leverage +direct-to-leader CP operations and will fall back to default round-robin behaviour, sending the request to any available +cluster member instead. This feature provides no benefit when `SINGLE_MEMBER` routing is used as the client only has 1 +available connection to use for all operation sending. -To query a map using SQL: +You can enable CP direct-to-leader routing with a single configuration option, as the following example shows: [source,java] ---- -String query = - "SELECT * FROM customers csv_likes"; -try (SqlResult result = client.getSql().execute(query)) { - for (SqlRow row : result) { - System.out.println("" + row.getObject(0)); - } -} +ClientConfig clientConfig = new ClientConfig(); +clientConfig.setCPDirectToLeaderRoutingEnabled(true); ---- -For details about querying with SQL, see xref:query:sql-overview.adoc[]. - -=== Building Data Pipelines +The following code shows the equivalent declarative configuration: -To build a data pipeline: - -[source,java] ----- -Pipeline EvenNumberStream = Pipeline.create(); -EvenNumberStream.readFrom(TestSources.itemStream(10)) - .withoutTimestamps() - .filter(event -> event.sequence() % 2 == 0) - .setName("filter out odd numbers") - .writeTo(Sinks.logger()); -client.getJet().newJob(EvenNumberStream); ----- - -For details about data pipelines, see xref:pipelines:overview.adoc[]. 
- -=== Defining Client Labels - -You can define labels in your Java client, similar to the way it can -be done for the xref:management:cluster-utilities.adoc[members]. -Through the client labels, you can assign special roles for your clients and -use these roles to perform some actions specific to those client connections. - -You can also group your clients using the client labels. -These client groups can be blacklisted in the Hazelcast Management Center so that -they can be prevented from connecting to a cluster. See the related section in the -Hazelcast Management Center Reference Manual for more information about this topic. - -Declaratively, you can define the client labels using the `client-labels` -configuration element. See the below example. - -[tabs] -==== -XML:: -+ --- +[tabs] +==== +XML:: ++ +-- [source,xml] ---- ... - barClient - - - - - .... + true + ... ---- -- @@ -416,140 +574,38 @@ YAML:: [source,yaml] ---- hazelcast-client: - instance-name: barClient - client-labels: - - user - - bar ----- -==== - -The equivalent programmatic approach is shown below. - -[source,java] ----- -ClientConfig clientConfig = new ClientConfig(); -clientConfig.setInstanceName("ExampleClientName"); -clientConfig.addLabel("user"); -clientConfig.addLabel("bar"); - -HazelcastClient.newHazelcastClient(clientConfig); + ... + cp-direct-to-leader-routing: true + ... ---- - -See the https://github.com/hazelcast/hazelcast-code-samples/tree/master/clients/client-labels[code sample^] -for the client labels to see them in action. - -=== Client Listeners - -You can configure listeners to listen to various event types on the client side. -You can configure global events not relating to any distributed object through -<>. -You should configure distributed object listeners like map entry listeners or -list item listeners through their proxies. See the related sections under -each distributed data structure in this Reference Manual. - -=== Client Transactions - -Transactional distributed objects are supported on the client side. -See xref:transactions:providing-xa-transactions.adoc[Transactions] for more details. - -[CAUTION] -.Deprecation Notice for Transactions ==== -Transactions have been deprecated, and will be removed as of Hazelcast version 7.0. An improved version of this feature is under consideration. If you are already using transactions, get in touch and share your use case. Your feedback will help us to develop a solution that meets your needs. -==== - -=== Async Start and Reconnect Modes - -Java client can be configured to connect to a cluster in an async manner during the -client start and reconnecting after a cluster disconnect. -Both of these options are configured via `ClientConnectionStrategyConfig`. - -Async client start is configured by setting the configuration element `async-start` to `true`. -This configuration changes the behavior of `HazelcastClient.newHazelcastClient()` call. -It returns a client instance without waiting to establish a cluster connection. -Until the client connects to cluster, it throws `HazelcastClientOfflineException` -on any network dependent operations hence they won't block. -If you want to check or wait the client to complete its cluster connection, -you can use the built-in lifecycle listener: - - -[source,java] ----- -ClientStateListener clientStateListener = new ClientStateListener(clientConfig); -HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig); - -//Client started but may not be connected to cluster yet. 
- -//check connection status -clientStateListener.isConnected(); - -//blocks until client completes connect to cluster -if (clientStateListener.awaitConnected()) { - //connected successfully -} else { - //client failed to connect to cluster -} ----- - -The Java client can also be configured to specify -how it reconnects after a cluster disconnection. -The following are the options: - -* A client can reject to reconnect to the cluster and trigger the client shutdown process. -* Client can open a connection to the cluster by blocking all waiting invocations. -* Client can open a connection to the cluster without blocking the waiting invocations. -All invocations receive `HazelcastClientOfflineException` during the establishment of cluster connection. -If cluster connection is failed to connect, then client shutdown is triggered. - -See the <> section to learn how to configure -these. - -== Configuring Java Client - -You can configure Hazelcast Java Client declaratively (XML), programmatically (API), or -using client system properties. - -For declarative configuration, the Hazelcast client looks at -the following places for the client configuration file: - -* **System property**: The client first checks if `hazelcast.client.config` system property is -set to a file path, e.g., `-Dhazelcast.client.config=C:/myhazelcast.xml`. -* **Classpath**: If config file is not set as a system property, -the client checks the classpath for `hazelcast-client.xml` file. -If the client does not find any configuration file, it starts with the default configuration -(`hazelcast-client-default.xml`) located in the `hazelcast.jar` library. -Before configuring the client, please try to work with the default configuration to see if -it works for you. The default should be just fine for most users. -If not, then consider custom configuration for your environment. +=== Java client connection strategy -If you want to specify your own configuration file to create a `Config` object, -the Hazelcast client supports the following: - -* `Config cfg = new XmlClientConfigBuilder(xmlFileName).build();` -* `Config cfg = new XmlClientConfigBuilder(inputStream).build();` +You can configure the client's starting mode as async or sync using +the configuration element `async-start`. When it is set to `true` (async), +Hazelcast creates the client without waiting for a connection to the cluster. +In this case, the client instance throws an exception until it connects to the cluster. +If it is `false`, the client is not created until the cluster is ready to use clients and +a connection with the cluster is established. The default value is `false` (sync) -For programmatic configuration of the Hazelcast Java Client, just instantiate a `ClientConfig` object and configure the desired aspects. An example is shown below: +You can also configure how the client reconnects to the cluster after a disconnection. +This is configured using the configuration element `reconnect-mode`, which has three options: -[source,java] ----- -ClientConfig clientConfig = new ClientConfig(); -clientConfig.setClusterName("dev"); -clientConfig.setLoadBalancer(yourLoadBalancer); ----- +* `OFF`: disables the reconnection +* `ON`: enables reconnection in a blocking manner, where all waiting invocations are blocked until +a cluster connection is established or fails +* `ASYNC`: enables reconnection in a non-blocking manner, where all waiting invocations receive a `HazelcastClientOfflineException`. -=== Client Network +The default value for `reconnect-mode` is `ON`. 
-All network related configuration of Hazelcast Java Client is performed via the -`network` element in the declarative configuration file, or in the class -`ClientNetworkConfig` when using programmatic configuration. -Let's first give the examples for these two approaches. -Then we will look at its sub-elements and attributes. +NOTE: When you have `ASYNC` as the `reconnect-mode` and have defined a near cache for your client, the client functions [[non-stop-client]]without interruptions/downtime by communicating the data from its near cache, +provided that there is non-expired data in it. To learn how to add a near cache to your client, see <>. -**Declarative Configuration:** +The following declarative and programmatic configuration examples show how to configure +a Java client's starting and reconnecting modes: -Here is an example declarative configuration of `network` for Java Client, -which includes all the parent configuration elements. +==== Declarative configuration [tabs] ==== @@ -560,60 +616,7 @@ XML:: ---- ... - - -
127.0.0.1
-
127.0.0.2
-
- - 34600 - 34700-34710 - - - true - 60000 - - ... - - - ... - - - - ... - - - ... - - - ... - - - ... - - - ... - - - ... - - - ... - - - EXAMPLE_TOKEN - - - - - - foo - 123 - true - - - -
+ ...
---- @@ -624,78 +627,29 @@ YAML:: [source,yaml] ---- hazelcast-client: - network: - cluster-members: - - 127.0.0.1 - - 127.0.0.2 - outbound-ports: - - 34600 - - 34700-34710 - cluster-routing: - mode: ALL_MEMBERS - redo-operation: true - connection-timeout: 60000 - socket-options: - ... - socket-interceptor: - ... - ssl: - enabled: false - ... - aws: - enabled: true - connection-timeout-seconds: 11 - ... - gcp: - enabled: false - ... - azure: - enabled: false - ... - kubernetes: - enabled: false - ... - eureka: - enabled: false - ... - icmp-ping: - enabled: false - ... - hazelcast-cloud: - enabled: false - discovery-token: EXAMPLE_TOKEN - discovery-strategies: - node-filter: - class: DummyFilterClass - discovery-strategies: - - class: DummyDiscoveryStrategy1 - enabled: true - properties: - key-string: foo - key-int: 123 - key-boolean: true + connection-strategy: + async-start: true + reconnect-mode: ASYNC ---- ==== -**Programmatic Configuration:** - -Here is an example of configuring network for Java Client programmatically. +==== Programmatic configuration [source,java] ---- -include::ROOT:example$/clients/ExampleClientConfiguration.java[tag=scc] +ClientConfig clientConfig = new ClientConfig(); +clientConfig.getConnectionStrategyConfig() + .setAsyncStart(true) + .setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC); ---- -==== Configuring Backup Acknowledgment +=== Configure with CNAME -When an operation with sync backup is sent by a client to the Hazelcast member(s), -the acknowledgment of the operation's backup is sent to the client by the backup -replica member(s). This improves the performance of the client operations. +Using CNAME, you can change the hostname resolutions and use them dynamically. -If using the `ALL_MEMBERS` cluster routing mode, backup acknowledgement to the client is enabled by default. -However, neither the `MULTI_MEMBER` nor the `SINGLE_MEMBER` cluster routing modes support backup acknowledgement to the client. +As an example, assume that you have two clusters, Cluster A and Cluster B, and two Java clients. -Here is an example of configuring the backup acknowledgement for Java Client declaratively. +First, configure the Cluster A members as shown below: [tabs] ==== @@ -704,9 +658,18 @@ XML:: -- [source,xml] ---- - - false - + + ... + + + + clusterA.member1 + clusterA.member2 + + + + ... + ---- -- @@ -714,40 +677,16 @@ YAML:: + [source,yaml] ---- -hazelcast-client: - backup-ack-to-client: false +hazelcast: + network: + join: + tcp-ip: + enabled: true + members: clusterA.member1,clusterA.member2 ---- ==== -And here is its equivalent programmatic configuration. - -[source,java] ----- -clientConfig.setBackupAckToClientEnabled(boolean enabled) ----- - -You can also fine tune this feature using the following system properties: - -* `hazelcast.client.operation.backup.timeout.millis`: If an operation has sync -backups, this property specifies how long (in milliseconds) the invocation waits -for acks from the backup replicas. If acks are not received from some -of the backups, there will not be any rollback on the other successful replicas. -Its default value is `5000` milliseconds. -* `hazelcast.client.operation.fail.on.indeterminate.state`: When it is `true`, -if an operation has sync backups and acks are not received from backup replicas -in time, or the member which owns primary replica of the target partition leaves -the cluster, then the invocation fails. 
However, even if the invocation fails, -there will not be any rollback on other successful replicas. It is default -value is `false`. - -==== Configuring Address List - -Address List is the initial list of cluster addresses to which the client will connect. -The client uses this list to find an alive member. Although it may be enough to give -only one address of a member in the cluster (since all members communicate with each other), -it is recommended that you give the addresses for all the members. - -Declarative Configuration: +Next, configure the Cluster B members as shown below: [tabs] ==== @@ -756,16 +695,18 @@ XML:: -- [source,xml] ---- - + ... - -
10.1.1.21
-
10.1.1.22:5703
-
+ + + clusterB.member1 + clusterB.member2 + +
... -
+ ---- -- @@ -773,61 +714,32 @@ YAML:: + [source,yaml] ---- -hazelcast-client: +hazelcast: network: - cluster-members: - - 10.1.1.21 - - 10.1.1.22:5703 + join: + tcp-ip: + enabled: true + members: clusterB.member1,clusterB.member2 ---- ==== -Programmatic Configuration: - -[source,java] ----- -ClientConfig clientConfig = new ClientConfig(); -ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig(); -networkConfig.addAddress("10.1.1.21", "10.1.1.22:5703"); ----- - -If the port part is omitted, then 5701, 5702 and 5703 are tried in a random order. - -You can provide multiple addresses with ports provided or not, as seen above. -The provided list is shuffled and tried in random order. -Its default value is *localhost*. - -IMPORTANT: If you have multiple members on a single machine and you are using the -<>, we recommend that you set explicit -xref:clusters:network-configuration.adoc#port[ports] for each member. Then you should provide those ports in your client configuration -when you give the member addresses (using the `address` configuration element or -`addAddress` method as exemplified above). This provides faster connections between clients and members. Otherwise, -all the load coming from your clients may go through a single member. - -==== Setting Outbound Ports - -You may want to restrict outbound ports to be used by Hazelcast-enabled applications. -To fulfill this requirement, you can configure Hazelcast Java client to use only defined outbound ports. -The following are example configurations. - -Declarative Configuration: +Now, configure the two clients as shown below: [tabs] ==== -XML:: +Client 1 XML:: + -- [source,xml] ---- ... + cluster-a - - - 34700-34710 - - 34700,34701,34702,34703 - 34700,34705-34710 - + +
production1.myproject
+
production2.myproject
+
...
@@ -839,62 +751,29 @@ YAML:: [source,yaml] ---- hazelcast-client: + cluster-name: cluster-a network: - outbound-ports: - - 34700-34710 - - 34700,34701,34702,34703 - - 34700,34705-34710 + cluster-members: + - production1.myproject + - production2.myproject ---- ==== -Programmatic Configuration: - -[source,java] ----- -... -NetworkConfig networkConfig = config.getNetworkConfig(); -// ports between 34700 and 34710 -networkConfig.addOutboundPortDefinition("34700-34710"); -// comma separated ports -networkConfig.addOutboundPortDefinition("34700,34701,34702,34703"); -networkConfig.addOutboundPort(34705); -... ----- - -NOTE: You can use port ranges and/or comma separated ports. - -As shown in the programmatic configuration, you use the method `addOutboundPort` to -add only one port. If you need to add a group of ports, then use the method `addOutboundPortDefinition`. - -In the declarative configuration, the element `ports` can be used for -both single and multiple port definitions. - -==== Configure Cluster Routing Mode - -You can configure the cluster routing mode to suit your requirements, as described in <>. - -The following examples show the configuration for each cluster routing mode. - -NOTE: If your clients want to use temporary permissions defined in a member, see -xref:security:native-client-security.adoc#handling-permissions-when-a-new-member-joins[Handling Permissions]. - -**ALL_MEMBERS** - -To connect to all members, use the `ALL_MEMBERS` cluster routing mode, which can be defined as follows. - -Declarative Configuration: - [tabs] ==== -XML:: +Client 2 XML:: + -- [source,xml] ---- ... + cluster-b - + +
production1.myproject
+
production2.myproject
+
...
@@ -906,36 +785,17 @@ YAML:: [source,yaml] ---- hazelcast-client: + cluster-name: cluster-b network: - cluster-routing: - mode: ALL_MEMBERS + cluster-members: + - production1.myproject + - production2.myproject ---- ==== -Programmatic Configuration: - -[source,java] ----- -ClientConfig clientConfig = new ClientConfig(); -ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig(); -networkConfig.getClusterRoutingConfig().setRoutingMode(RoutingMode.ALL_MEMBERS); ----- - -**SINGLE_MEMBER** - -To connect to a single member, which can be used as a gateway to the other members, use the `SINGLE_MEMBER` cluster routing mode, which can be defined as described below. - -When using the `SINGLE_MEMBER` cluster routing mode, consider the following: - -* The absence of <>, as the client does not have a view of the entire cluster -* If you have multiple members on a single machine, we advise that <> -* If CP group leader priority is assigned appropriately, and the client is explicitly set to connect to a CP group leader, -connections to the xref:cp-subsystem:cp-subsystem.adoc[CP Subsystem] are direct-to-leader, which can result in improved performance. -If leadership is reassigned while using `SINGLE_MEMBER` cluster routing, then this benefit may be lost. -* <> configuration is ignored -* xref:cluster-performance:thread-per-core-tpc.adoc[Thread-Per-Core] is not supported for `SINGLE_MEMBER` cluster routing and no benefit will be gained by enabling it with this routing mode. - -Declarative Configuration: +Assuming that the client configuration filenames for the above example clients are +`hazelcast-client-c1.xml/yaml` and `hazelcast-client-c2.xml/yaml`, you should configure the +client failover for a blue-green deployment scenario as follows: [tabs] ==== @@ -944,13 +804,13 @@ XML:: -- [source,xml] ---- - - ... - - - - ... - + + 4 + + hazelcast-client-c1.xml + hazelcast-client-c2.xml + + ---- -- @@ -958,46 +818,45 @@ YAML:: + [source,yaml] ---- -hazelcast-client: - network: - cluster-routing: - mode: SINGLE_MEMBER +hazelcast-client-failover: + try-count: 4 + clients: + - hazelcast-client-c1.yaml + - hazelcast-client-c2.yaml ---- ==== -Programmatic Configuration: +NOTE: You can find the complete Hazelcast client failover +example configuration file (`hazelcast-client-failover-full-example`) +both in XML and YAML formats including the descriptions of elements and attributes, +in the `/bin` directory of your Hazelcast download directory. -[source,java] ----- -ClientConfig clientConfig = new ClientConfig(); -ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig(); -networkConfig.getClusterRoutingConfig().setRoutingMode(RoutingMode.SINGLE_MEMBER); ----- +You should also configure your clients to forget DNS lookups using the +https://docs.oracle.com/javase/7/docs/technotes/guides/net/properties.html[networkaddress.cache.ttl JVM parameter]. -**MULTI_MEMBER** +You should also configure the addresses in your clients' configuration to resolve to the hostnames of +Cluster A via CNAME so that the clients will connect to Cluster A when it starts: -To connect to a subset partition grouping of members, which allows direct connection to the specified group and gateway connections to other members, use the `MULTI_MEMBER` cluster routing mode, which can be defined as follows. +`production1.myproject` → `clusterA.member1` -To use the `MULTI_MEMBER` cluster routing mode, you must also define the grouping strategy to apply. 
For further information on configuring partition groups, see xref:clusters:partition-group-configuration.adoc[].
+`production2.myproject` → `clusterA.member2`

-When using the `MULTI_MEMBER` cluster routing mode, consider the following:
+When you want the clients to switch to the other cluster, change the mapping as follows:
+
+`production1.myproject` → `clusterB.member1`
-* The <>, which failover to another partition group where one is available.
-No retry attempt is made to connect to the lost member(s)
-+
-In a split and heal scenario, where the client has no access to other group members, the client is re-assigned to the initial group.
-+
-In a scenario where all group members are killed almost simultaneously, the client loses connection but reconnects when a member starts again.
+`production2.myproject` → `clusterB.member2`

-* The absence of <>, as the client does not have a view of the entire cluster
-If <> is enabled on your clients, and the `ADVANCED_CP` license
-is present on your Enterprise cluster, then clients in this routing mode can use this to send CP operations directly
-to group leaders wherever possible, even after leadership changes.
-* Best efforts are made to route operations to the required member, but if this cannot be done operations are routed as defined in the <>
+Wait for the time you configured using the `networkaddress.cache.ttl` JVM parameter for
+the client JVM to forget the old mapping.

-* xref:cluster-performance:thread-per-core-tpc.adoc[Thread-Per-Core] is not supported for `MULTI_MEMBER` cluster routing and may lead to event inconsistency if used.
+Finally, blocklist the clients in Cluster A using Hazelcast Management Center.

-Declarative Configuration:
+=== Configure without CNAME
+
+Review these example configurations and the descriptions that follow:
+
+==== Declarative configuration

[tabs]
====
XML::
+
--
[source,xml]
----
- - ... - - - PARTITION_GROUPS - - - ... -
+<hazelcast-client-failover>
+    <try-count>4</try-count>
+    <clients>
+        <client>hazelcast-client-c1.xml</client>
+        <client>hazelcast-client-c2.xml</client>
+    </clients>
+</hazelcast-client-failover>
----
--

YAML::
+
[source,yaml]
----
-hazelcast-client:
-  network:
-    cluster-routing:
-      mode: MULTI_MEMBER
-      grouping-strategy: PARTITION_GROUPS
+hazelcast-client-failover:
+  try-count: 4
+  clients:
+    - hazelcast-client-c1.yaml
+    - hazelcast-client-c2.yaml
----
====

-Programmatic Configuration:
+==== Programmatic configuration

[source,java]
----
ClientConfig clientConfig = new ClientConfig();
+clientConfig.setClusterName("cluster-a");
ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig();
-networkConfig.getClusterRoutingConfig().setRoutingMode(RoutingMode.MULTI_MEMBER);
-// PARTITION_GROUPS is the default strategy, so it does not need to be explicitly defined
-networkConfig.getClusterRoutingConfig().setRoutingStrategy(RoutingStrategy.PARTITION_GROUPS);
+networkConfig.addAddress("10.216.1.18", "10.216.1.19");
+
+ClientConfig clientConfig2 = new ClientConfig();
+clientConfig2.setClusterName("cluster-b");
+ClientNetworkConfig networkConfig2 = clientConfig2.getNetworkConfig();
+networkConfig2.addAddress("10.214.2.10", "10.214.2.11");
+
+ClientFailoverConfig clientFailoverConfig = new ClientFailoverConfig();
+clientFailoverConfig.addClientConfig(clientConfig).addClientConfig(clientConfig2).setTryCount(10);
+HazelcastInstance client = HazelcastClient.newHazelcastFailoverClient(clientFailoverConfig);
----

-TIP: If you are using the `smart` or `unisocket` client operation modes, select **5.4** from the version picker above the navigation pane to see the configuration information.
The cluster routing mode described above must not be present in your configuration. +For more information on the configuration elements, see the following descriptions: -==== Enabling Redo Operation +* `try-count`: count of connection retries by the client to the alternative clusters. -It enables/disables redo-able operations as described in -<>. -The following are the example configurations. +When this value is reached, the client shuts down if it can't connect to a cluster. This value also applies to alternative clusters configured by the `client` element. For the above example, two alternative clusters are given +with the `try-count` set as `4`. This means the number of connection attempts is 4 x 2 = 8. -Declarative Configuration: +* `client`: path to the client configuration that corresponds to an alternative cluster that the client will try to connect to. -[tabs] -==== -XML:: -+ --- -[source,xml] ----- - - ... - - true - - ... - ----- --- +The client configurations must be exactly the same **except** for the following configuration options: -YAML:: -+ -[source,yaml] ----- -hazelcast-client: - network: - redo-operation: true ----- -==== +* `SecurityConfig` +* `NetworkConfig.Addresses` +* `NetworkConfig.SocketInterceptorConfig` +* `NetworkConfig.SSLConfig` +* `NetworkConfig.AwsConfig` +* `NetworkConfig.GcpConfig` +* `NetworkConfig.AzureConfig` +* `NetworkConfig.KubernetesConfig` +* `NetworkConfig.EurekaConfig` +* `NetworkConfig.CloudConfig` +* `NetworkConfig.DiscoveryConfig` -Programmatic Configuration: +You can also configure it within the Spring context, as shown below: -[source,java] +[source,xml] ---- -ClientConfig clientConfig = new ClientConfig(); -ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig(); -networkConfig().setRedoOperation(true); + + + + + + 127.0.0.1:5700 + 127.0.0.1:5701 + + + + + + + 127.0.0.1:5702 + 127.0.0.1:5703 + + + + + ---- -Its default value is `false` (disabled). -==== Setting Connection Timeout -Connection timeout is the timeout value in milliseconds for members to -accept client connection requests. The following are the example configurations. -Declarative Configuration: +== Configure client network +=== Configuration options + +You can manage all network-related configuration setting using either the `network` element (declarative) or the `ClientNetworkConfig` class (programmatic). + +This section provides full examples for these two approaches, and then looks at the sub-elements and attributes in detail. + +==== Declarative configuration + +The following declarative `network` configuration examples include all the public configuration APIs/methods: [tabs] ==== @@ -1106,7 +979,58 @@ XML:: ... - 5000 + +
127.0.0.1
+
127.0.0.2
+
+ + 34600 + 34700-34710 + + + true + 60000 + + ... + + + ... + + + + ... + + + ... + + + ... + + + ... + + + ... + + + ... + + + ... + + + EXAMPLE_TOKEN + + + + + + foo + 123 + true + + +
...
@@ -1119,118 +1043,133 @@ YAML:: ---- hazelcast-client: network: - connection-timeout: 5000 + cluster-members: + - 127.0.0.1 + - 127.0.0.2 + outbound-ports: + - 34600 + - 34700-34710 + cluster-routing: + mode: ALL_MEMBERS + redo-operation: true + connection-timeout: 60000 + socket-options: + ... + socket-interceptor: + ... + ssl: + enabled: false + ... + aws: + enabled: true + connection-timeout-seconds: 11 + ... + gcp: + enabled: false + ... + azure: + enabled: false + ... + kubernetes: + enabled: false + ... + eureka: + enabled: false + ... + icmp-ping: + enabled: false + ... + hazelcast-cloud: + enabled: false + discovery-token: EXAMPLE_TOKEN + discovery-strategies: + node-filter: + class: DummyFilterClass + discovery-strategies: + - class: DummyDiscoveryStrategy1 + enabled: true + properties: + key-string: foo + key-int: 123 + key-boolean: true ---- ==== -Programmatic Configuration: +==== Programmatic configuration + +The following example programmatic `network` configuration includes all the parent configuration attributes: +// is attributes the right term here? [source,java] ---- -ClientConfig clientConfig = new ClientConfig(); -clientConfig.getNetworkConfig().setConnectionTimeout(5000); +include::ROOT:example$/clients/ExampleClientConfiguration.java[tag=scc] ---- -Its default value is *5000* milliseconds. - -==== Setting a Socket Interceptor +The following sections include details and usage examples for sub-elements and attributes. -[blue]*Hazelcast {enterprise-product-name}* - -Following is a client configuration to set a socket interceptor. -Any class implementing `com.hazelcast.nio.SocketInterceptor` is a socket interceptor. +=== Configure backup acknowledgment +// check standalone -[source,java] ----- -public interface SocketInterceptor { - void init(Properties properties); - void onConnect(Socket connectedSocket) throws IOException; -} ----- +When an operation with sync backup is sent by a client to the Hazelcast member(s), +the acknowledgment of the operation's backup is sent to the client by the backup +replica member(s). This improves the performance of the client operations. -`SocketInterceptor` has two steps. First, it is initialized by the configured properties. -Second, it is informed just after the socket is connected using the `onConnect` method. +If using the `ALL_MEMBERS` cluster routing mode, backup acknowledgement to the client is enabled by default. +However, neither the `MULTI_MEMBER` nor the `SINGLE_MEMBER` cluster routing modes support backup acknowledgement to the client. +The following declarative example shows how to configure backup acknowledgement: -[source,java] +[tabs] +==== +XML:: ++ +-- +[source,xml] ---- -SocketInterceptorConfig socketInterceptorConfig = clientConfig - .getNetworkConfig().getSocketInterceptorConfig(); - -MyClientSocketInterceptor myClientSocketInterceptor = new MyClientSocketInterceptor(); - -socketInterceptorConfig.setEnabled(true); -socketInterceptorConfig.setImplementation(myClientSocketInterceptor); + + false + ---- +-- -If you want to configure the socket interceptor with a class name instead of an instance, -see the example below. 
- -[source,java] +YAML:: ++ +[source,yaml] ---- -SocketInterceptorConfig socketInterceptorConfig = clientConfig - .getNetworkConfig().getSocketInterceptorConfig(); - -socketInterceptorConfig.setEnabled(true); - -//These properties are provided to interceptor during init -socketInterceptorConfig.setProperty("kerberos-host","kerb-host-name"); -socketInterceptorConfig.setProperty("kerberos-config-file","kerb.conf"); - -socketInterceptorConfig.setClassName(MyClientSocketInterceptor.class.getName()); +hazelcast-client: + backup-ack-to-client: false ---- +==== -NOTE: See the xref:security:socket-interceptor.adoc[Socket Interceptor section] for more information. - -==== Configuring Network Socket Options - -You can configure the network socket options using `SocketOptions`. It has the following methods: - -* `socketOptions.setKeepAlive(x)`: Enables/disables the *SO_KEEPALIVE* socket option. -Its default value is `true`. -* `socketOptions.setTcpNoDelay(x)`: Enables/disables the *TCP_NODELAY* socket option. -Its default value is `true`. -* `socketOptions.setReuseAddress(x)`: Enables/disables the *SO_REUSEADDR* socket option. -Its default value is `true`. -* `socketOptions.setLingerSeconds(x)`: Enables/disables *SO_LINGER* with the specified linger time in seconds. -Its default value is `3`. -* `socketOptions.setBufferSize(x)`: Sets the *SO_SNDBUF* and *SO_RCVBUF* options to the specified value in KB for this Socket. -Its default value is `32`. - +The following programmatic example shows how to configure backup acknowledgement: [source,java] ---- -SocketOptions socketOptions = clientConfig.getNetworkConfig().getSocketOptions(); -socketOptions.setBufferSize(32) - .setKeepAlive(true) - .setTcpNoDelay(true) - .setReuseAddress(true) - .setLingerSeconds(3); +clientConfig.setBackupAckToClientEnabled(boolean enabled) ---- -==== Enabling Client TLS/SSL - -[blue]*Hazelcast {enterprise-product-name}* - -You can use TLS/SSL to secure the connection between the client and the members. -If you want TLS/SSL enabled for the client-cluster connection, you should set `SSLConfig`. -Once set, the connection (socket) is established out of an TLS/SSL factory defined either by -a factory class name or factory implementation. See the xref:security:tls-ssl.adoc[TLS/SSL section]. +You can also fine tune this feature using the following system properties: -As explained in the TLS/SSL section, Hazelcast members have keyStores used to -identify themselves (to other members) and Hazelcast clients have trustStore used to -define which members they can trust. The clients also have their keyStores and -members have their trustStores so that the members can -know which clients they can trust: see the xref:security:tls-ssl.adoc#mutual-authentication[Mutual Authentication section]. +* `hazelcast.client.operation.backup.timeout.millis`: if an operation has sync +backups, this property specifies how long (in milliseconds) the invocation waits +for acks from the backup replicas. If acks are not received from some +of the backups, there will not be any rollback on the other successful replicas. +The default value is `5000` milliseconds. +* `hazelcast.client.operation.fail.on.indeterminate.state`: when `true`, +if an operation has sync backups and acks are not received from backup replicas +in time, or the member which owns the primary replica of the target partition leaves +the cluster, then the invocation fails. However, even if the invocation fails, +there will not be any rollback on other successful replicas. 
The default value is `false`. -==== Configuring Hazelcast {hazelcast-cloud} +=== Configure address list -You can connect your Java client to a {hazelcast-cloud} Standard cluster which is hosted on link:{url-cloud-signup}[{hazelcast-cloud}]. -For this, you simply enable {hazelcast-cloud} and specify the cluster's discovery token provided while creating the cluster; this allows the cluster to discover your clients. -See the following example configurations. +The address List is the initial list of cluster addresses to which the client will connect. +The client uses this list to find an alive member. Although it may be enough to give +only one address of a member in the cluster (since all members communicate with each other), +we recommended that you add the addresses for all the members. -Declarative Configuration: +==== Declarative configuration [tabs] ==== @@ -1242,10 +1181,10 @@ XML:: ... - - - YOUR_TOKEN - + +
+            <address>10.1.1.21</address>
+            <address>10.1.1.22:5703</address>
+        </cluster-members>
...
@@ -1258,57 +1197,39 @@ YAML:: ---- hazelcast-client: network: - ssl: - enabled: true - hazelcast-cloud: - enabled: true - discovery-token: YOUR_TOKEN + cluster-members: + - 10.1.1.21 + - 10.1.1.22:5703 ---- ==== -Programmatic Configuration: +==== Programmatic configuration [source,java] ---- -ClientConfig config = new ClientConfig(); -ClientNetworkConfig networkConfig = config.getNetworkConfig(); -networkConfig.getCloudConfig().setDiscoveryToken("TOKEN").setEnabled(true); -networkConfig.setSSLConfig(new SSLConfig().setEnabled(true)); -HazelcastInstance client = HazelcastClient.newHazelcastClient(config); +ClientConfig clientConfig = new ClientConfig(); +ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig(); +networkConfig.addAddress("10.1.1.21", "10.1.1.22:5703"); ---- -{hazelcast-cloud} is disabled for the Java client, by default (`enabled` attribute is `false`). - -See xref:cloud:ROOT:overview.adoc[Hazelcast {hazelcast-cloud}] for more information about {hazelcast-cloud}. - -NOTE: Since this is a REST based discovery, you need to enable the REST listener service. -See the xref:clients:rest.adoc#using-the-rest-endpoint-groups[REST Endpoint Groups section] on how to enable REST endpoints. - -include::partial$rest-deprecation.adoc[] - -[NOTE] -==== -It is advised to enable certificate revocation status JRE-wide, for security reasons. -You need to set the following Java system properties to `true`: - -* `com.sun.net.ssl.checkRevocation` -* `com.sun.security.enableCRLDP` - -And you need to set the Java security property as follows: +You can add addresses with or without the port number. If the port is omitted, then the default ports (5701, 5702, 5703) are tried in random order. -`Security.setProperty("ocsp.enable", "true")` +The address list is tried in random order. The default value is `localhost`. -You can find more details on the related security topics -http://docs.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CERTPATH[here^] and -http://docs.oracle.com/javase/6/docs/technotes/guides/security/certpath/CertPathProgGuide.html#AppC[here^]. -==== +IMPORTANT: If you have multiple members on a single machine and you are using the +<>, we recommend that you set explicit +xref:clusters:network-configuration.adoc#port[ports] for each member. Then you should provide those ports in your client configuration +when you give the member addresses (using the `address` configuration element or +`addAddress` method as exemplified above). This provides faster connections between clients and members. Otherwise, +all the load coming from your clients may go through a single member. -==== Configuring Client for AWS +=== Set outbound ports -The example declarative and programmatic configurations below show -how to configure a Java client for connecting to a Hazelcast cluster in AWS. +You may want to restrict outbound ports to be used by Hazelcast-enabled applications. +To fulfill this requirement, you can configure Hazelcast Java client to use only defined outbound ports. +The following are example configurations. -Declarative Configuration: +Declarative configuration: [tabs] ==== @@ -1320,16 +1241,13 @@ XML:: ... - - true - my-access-key - my-secret-key - us-west-1 - ec2.amazonaws.com - hazelcast-sg - type - hz-members - + + + 34700-34710 + + 34700,34701,34702,34703 + 34700,34705-34710 + ... 
@@ -1342,301 +1260,223 @@ YAML:: ---- hazelcast-client: network: - aws: - enabled: true - use-public-ip: true - access-key: my-access-key - secret-key: my-secret-key - region: us-west-1 - host-header: ec2.amazonaws.com - security-group-name: hazelcast-sg - tag-key: type - tag-value: hz-members + outbound-ports: + - 34700-34710 + - 34700,34701,34702,34703 + - 34700,34705-34710 ---- ==== -Programmatic Configuration: +Programmatic configuration: [source,java] ---- -include::ROOT:example$/clients/ExampleClientAwsConfig.java[tag=clientaws] +... +NetworkConfig networkConfig = config.getNetworkConfig(); +// ports between 34700 and 34710 +networkConfig.addOutboundPortDefinition("34700-34710"); +// comma separated ports +networkConfig.addOutboundPortDefinition("34700,34701,34702,34703"); +networkConfig.addOutboundPort(34705); +... ---- -See the xref:clusters:network-configuration.adoc#aws-element[aws element section] for the descriptions of -the above AWS configuration elements except `use-public-ip`. +NOTE: You can use port ranges and/or comma separated ports. -If the `use-public-ip` element is set to `true`, the private addresses of cluster members -are always converted to public addresses. Also, the client uses public addresses to -connect to the members. In order to use private addresses, set the `use-public-ip` parameter to `false`. -Also note that, when connecting outside from AWS, setting the `use-public-ip` parameter to `false` causes -the client to not be able to reach the members. +As shown in the programmatic configuration, you use the method `addOutboundPort` to +add only one port. If you need to add a group of ports, then use the method `addOutboundPortDefinition`. -=== Configuring Client Load Balancer +In the declarative configuration, the element `ports` can be used for +both single and multiple port definitions. -`LoadBalancer` allows you to send operations to one of a number of endpoints (Members). -Its main purpose is to determine the next `Member` if queried. -It is up to your implementation to use different load balancing policies. -You should implement the interface `com.hazelcast.client.LoadBalancer` for that purpose. +=== Configure cluster routing mode -For <>, the behaviour is as follows: +You can configure the cluster routing mode to suit your requirements, as described in <>. -* If set to `ALL_MEMBERS` only the operations that are not -key-based are routed to the endpoint that is returned by the `LoadBalancer` -* If set to `SINGLE_MEMBER`, `LoadBalancer` is ignored -* If set to `MULTI_MEMBER`, best efforts are made to route operations to the required member. -If this cannot be done for any reason, operations are routed as defined in the `LoadBalancer` +The following examples show the configuration for each cluster routing mode. -NOTE: If you are using the smart or unisocket client operation modes, select 5.4 from the version picker -above the navigation pane to see the relevant information. +NOTE: If your clients want to use temporary permissions defined in a member, see +xref:security:native-client-security.adoc#handling-permissions-when-a-new-member-joins[Handling Permissions]. -The following are example configurations. +**Client ALL_MEMBERS routing** -Declarative Configuration: +To connect to all members, use the `ALL_MEMBERS` cluster routing mode, which can be defined as follows. [tabs] ==== XML:: + -- +Declarative configuration: [source,xml] ---- ... - + + + ... 
---- -- - YAML:: + +-- +Declarative configuration: [source,yaml] ---- hazelcast-client: - load-balancer: - type: random + network: + cluster-routing: + mode: ALL_MEMBERS ---- -==== - -Programmatic Configuration: - +-- +JAVA:: ++ +-- +Programmatic configuration: [source,java] ---- ClientConfig clientConfig = new ClientConfig(); -clientConfig.setLoadBalancer(yourLoadBalancer); +ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig(); +networkConfig.getClusterRoutingConfig().setRoutingMode(RoutingMode.ALL_MEMBERS); ---- +-- +==== + +**Client SINGLE_MEMBER routing** -=== Configuring Client Listeners +To connect to a single member, which can be used as a gateway to the other members, use the `SINGLE_MEMBER` cluster routing mode, which can be defined as described below. -You can configure global event listeners using `ListenerConfig` as shown below. +When using the `SINGLE_MEMBER` cluster routing mode, consider the following: -[source,java] +* The absence of <>, as the client does not have a view of the entire cluster +* If you have multiple members on a single machine, we advise that <> +* If CP group leader priority is assigned appropriately, and the client is explicitly set to connect to a CP group leader, +connections to the xref:cp-subsystem:cp-subsystem.adoc[CP Subsystem] are direct-to-leader, which can result in improved performance. +If leadership is reassigned while using `SINGLE_MEMBER` cluster routing, then this benefit may be lost. +* <> configuration is ignored +* xref:cluster-performance:thread-per-core-tpc.adoc[Thread-Per-Core] is not supported for `SINGLE_MEMBER` cluster routing and no benefit will be gained by enabling it with this routing mode. + +[tabs] +==== +XML:: ++ +-- +Declarative configuration: +[source,xml] ---- -ClientConfig clientConfig = new ClientConfig(); -ListenerConfig listenerConfig = new ListenerConfig(LifecycleListenerImpl); -clientConfig.addListenerConfig(listenerConfig); + + ... + + + + ... + ---- - +-- +YAML:: ++ +-- +Declarative configuration: +[source,yaml] +---- +hazelcast-client: + network: + cluster-routing: + mode: SINGLE_MEMBER +---- +-- +JAVA:: ++ +-- +Programmatic configuration: [source,java] ---- ClientConfig clientConfig = new ClientConfig(); -ListenerConfig listenerConfig = new ListenerConfig("com.hazelcast.example.MembershipListenerImpl"); -clientConfig.addListenerConfig(listenerConfig); +ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig(); +networkConfig.getClusterRoutingConfig().setRoutingMode(RoutingMode.SINGLE_MEMBER); ---- +-- +==== -You can add the following types of event listeners: - -* LifecycleListener -* MembershipListener -* DistributedObjectListener - -=== Configuring Client Near Cache - -The Hazelcast distributed map supports a local Near Cache for remotely stored entries to -increase the performance of local read operations. Since the client always requests data from -the cluster members, it can be helpful in some use cases to configure a Near Cache on the client side. -See the xref:performance:near-cache.adoc[Near Cache section] for a detailed explanation of the Near Cache feature and its configuration. - -=== Configuring Client Cluster Name - -Clients should provide a cluster name in order to connect to the cluster. -You can configure it using `ClientConfig`, as shown below. 
- -``` -clientConfig.setClusterName("dev"); -``` - -[[client-security-configuration]] -=== Configuring Client Security - -In the cases where the security established with `Config` is not enough and -you want your clients connecting securely to the cluster, you can use `ClientSecurityConfig`. -This configuration has a `credentials` parameter to set the IP address and UID. -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/client/config/ClientSecurityConfig.html[ClientSecurityConfig Javadoc^]. - -[[client-serialization-configuration]] -=== Client Serialization Configuration - -For the client side serialization, use the Hazelcast configuration. -See the xref:serialization:serialization.adoc[Serialization chapter]. - -[[classloader]] -=== Configuring ClassLoader - -You can configure a custom `classLoader`. -It is used by the serialization service and to load any class configured in configuration, such as -event listeners or ProxyFactories. +**Client MULTI_MEMBER routing** -[[configuring-reliable-topic-at-client-side]] -=== Configuring Reliable Topic on the Client Side +To connect to a subset partition grouping of members, which allows direct connection to the specified group and gateway connections to other members, use the `MULTI_MEMBER` cluster routing mode, which can be defined as follows. -Normally when a client uses a Hazelcast data structure, -that structure is configured on the member side and the client makes use of that configuration. -For the Reliable Topic structure, this is not the case; since it is backed by Ringbuffer, -you should configure it on the client side. The class used for this configuration is `ClientReliableTopicConfig`. +To use the `MULTI_MEMBER` cluster routing mode, you must also define the grouping strategy to apply. For further information on configuring partition groups, see xref:clusters:partition-group-configuration.adoc[]. -Here is an example programmatic configuration snippet: +When using the `MULTI_MEMBER` cluster routing mode, consider the following: -[source,java] ----- -include::ROOT:example$/clients/ExampleRTClient.java[tag=rtclient] ----- +* The <>, which failover to another partition group where one is available. +No retry attempt is made to connect to the lost member(s) ++ +In a split and heal scenario, where the client has no access to other group members, the client is re-assigned to the initial group. ++ +In a scenario where all group members are killed almost simultaneously, the client loses connection but reconnects when a member starts again. -Note that, when you create a Reliable Topic structure on your client, a Ringbuffer -(with the same name as the Reliable Topic) is automatically created on the member side, -with its default configuration. See the xref:data-structures:ringbuffer.adoc[Configuring Ringbuffer section] for the defaults. -You can edit that configuration according to your needs. +* The absence of <>, as the client does not have a view of the entire cluster +If <> is enabled on your clients, and the `ADVANCED_CP` license +is present on your Enterprise cluster, then clients in this routing mode can use this to send CP operations directly +to group leaders wherever possible, even after leadership changes. +* Best efforts are made to route operations to the required member, but if this cannot be done operations are routed as defined in the <> -You can configure a Reliable Topic structure on the client side also declaratively. 
-The following is the declarative configuration equivalent to the above example: +* xref:cluster-performance:thread-per-core-tpc.adoc[Thread-Per-Core] is not supported for `MULTI_MEMBER` cluster routing and may lead to event inconsistency if used. [tabs] ==== XML:: + -- +Declarative configuration: [source,xml] ---- ... - - 10000000 - 5 - - - BLOCK - 10 - + + + PARTITION_GROUPS + + ... ---- -- - YAML:: + +-- +Declarative configuration: [source,yaml] ---- hazelcast-client: - ringbuffer: - default: - capacity: 10000000 - time-to-live-seconds: 5 - reliable-topic: - default: - topic-overload-policy: BLOCK - read-batch-size: 10 + network: + cluster-routing: + mode: MULTI_MEMBER + grouping-strategy: PARTITION_GROUPS ---- +-- +JAVA:: ++ +-- +Programmatic configuration: +[source,java] +---- +ClientConfig clientConfig = new ClientConfig(); +ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig(); +networkConfig.getClusterRoutingConfig().setRoutingMode(RoutingMode.MULTI_MEMBER); +// PARTITION_GROUPS is the default strategy, so it does not need to be explicitly defined +networkConfig.getClusterRoutingConfig().setRoutingStrategy(RoutingStrategy.PARTITION_GROUPS); +---- +-- ==== -[[configuring-direct-to-leader-routing]] -=== Configuring CP direct-to-leader Operation Routing For Clients +TIP: If you are using the `smart` or `unisocket` client operation modes, select **5.4** from the version picker above the navigation pane to see the configuration information. The cluster routing mode described above must not be present in your configuration. -When operating a Hazelcast Enterprise cluster with the `ADVANCED_CP` license it is possible to configure clients to -leverage direct to leader routing for xref:cp-subsystem:cp-subsystem.adoc[CP Subsystem] operations. When enabled, -this functionality allows clients to receive a mapping of CP group leadership from the cluster and use it to send -CP data structure operations directly to the relevant group leader. This leadership mapping is also updated whenever -leadership changes occur. - -CP data structure reads and writes must be actioned by the CP leader responsible for the group involved. By leveraging -direct to leader routing for CP operations, clients will be able to send all operations directly to their group leaders, -cutting out the need for intermediate hops through other cluster members. This allows clients to achieve lower latency and -higher throughput for their CP operations, while also reducing the pressure on the internal cluster network, resulting in -greater cluster stability. - -This functionality is disabled by default and must be explicitly enabled. This is done because you should consider your -specific use-case for CP operation sending and assess the impact of direct to leader routing on your topology. In scenarios -where clients have increased latency to CP group leaders, it may be detrimental to route all operations directly to them -instead of using a faster internal cluster link and routing through another member. It should also be considered that -direct to leader routing can put uneven pressure on the cluster if CP group leaders receive substantially more load than -other members of the cluster - this is particularly problematic when only one CP group leader is present. 
- -NOTE: If a client does not have an active connection to a known CP group leader then the client will be unable to leverage -direct-to-leader CP operations and will fall back to default round-robin behaviour, sending the request to any available -cluster member instead. This feature provides no benefit when `SINGLE_MEMBER` routing is used as the client only has 1 -available connection to use for all operation sending. - -CP direct to leader routing can be enabled on clients with a single configuration option. Here is an example programmatic -configuration snippet: - -[source,java] ----- -ClientConfig clientConfig = new ClientConfig(); -clientConfig.setCPDirectToLeaderRoutingEnabled(true); ----- - -The following is the declarative configuration equivalent of the above example: - -[tabs] -==== -XML:: -+ --- -[source,xml] ----- - - ... - true - ... - ----- --- - -YAML:: -+ -[source,yaml] ----- -hazelcast-client: - ... - cp-direct-to-leader-routing: true - ... ----- -==== - -== Java Client Connection Strategy - -You can configure the client's starting mode as async or sync using -the configuration element `async-start`. When it is set to `true` (async), -Hazelcast creates the client without waiting a connection to the cluster. -In this case, the client instance throws an exception until it connects to the cluster. -If it is `false`, the client is not created until the cluster is ready to use clients and -a connection with the cluster is established. Its default value is `false` (sync) +=== Enable redo operations -You can also configure how the client reconnects to the cluster after a disconnection. -This is configured using the configuration element `reconnect-mode`; it has three options -(`OFF`, `ON` or `ASYNC`). The option `OFF` disables the reconnection. -`ON` enables reconnection in a blocking manner where all the waiting invocations are blocked until -a cluster connection is established or failed. -The option `ASYNC` enables reconnection in a non-blocking manner where -all the waiting invocations receive a `HazelcastClientOfflineException`. -Its default value is `ON`. - -NOTE: When you have `ASYNC` as the `reconnect-mode` and defined a Near Cache for your client, -the client functions [[non-stop-client]]without interruptions/downtime by communicating the data from its Near Cache, -provided that there is non-expired data in it. See <> to -learn how you can add a Near Cache to your client. - -The example declarative and programmatic configurations below show how to configure -a Java client's starting and reconnecting modes. +It enables/disables redo-able operations as described in +<>. +The following are the example configurations. Declarative Configuration: @@ -1649,7 +1489,9 @@ XML:: ---- ... - + + true + ... ---- @@ -1660,9 +1502,8 @@ YAML:: [source,yaml] ---- hazelcast-client: - connection-strategy: - async-start: true - reconnect-mode: ASYNC + network: + redo-operation: true ---- ==== @@ -1671,21 +1512,18 @@ Programmatic Configuration: [source,java] ---- ClientConfig clientConfig = new ClientConfig(); -clientConfig.getConnectionStrategyConfig() - .setAsyncStart(true) - .setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC); +ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig(); +networkConfig().setRedoOperation(true); ---- -=== Configuring Client Connection Retry +Its default value is `false` (disabled). -When the client is disconnected from the cluster or trying to connect to a one -for the first time, it searches for new connections. 
You can configure the frequency -of the connection attempts and client shutdown behavior using -`ConnectionRetryConfig` (programmatic approach)/`connection-retry` (declarative approach). +=== Set connection timeout -Below are the example configurations for each. +Connection timeout is the timeout value in milliseconds for members to +accept client connection requests. -Declarative Configuration: +The following code shows a declarative example configuration: [tabs] ==== @@ -1696,15 +1534,9 @@ XML:: ---- ... - - - 1000 - 60000 - 2 - 50000 - 0.2 - - + + 5000 + ... ---- @@ -1715,205 +1547,126 @@ YAML:: [source,yaml] ---- hazelcast-client: - connection-strategy: - async-start: false - reconnect-mode: ON - connection-retry: - initial-backoff-millis: 1000 - max-backoff-millis: 60000 - multiplier: 2 - cluster-connect-timeout-millis: 50000 - jitter: 0.2 + network: + connection-timeout: 5000 ---- ==== -Programmatic Configuration: +The following code shows a programmatic example configuration: [source,java] ---- -ClientConfig config = new ClientConfig(); -ClientConnectionStrategyConfig connectionStrategyConfig = config.getConnectionStrategyConfig(); -ConnectionRetryConfig connectionRetryConfig = connectionStrategyConfig.getConnectionRetryConfig(); -connectionRetryConfig.setInitialBackoffMillis(1000) - .setMaxBackoffMillis(60000) - .setMultiplier(2) - .setClusterConnectTimeoutMillis(50000) - .setJitter(0.2); - +ClientConfig clientConfig = new ClientConfig(); +clientConfig.getNetworkConfig().setConnectionTimeout(5000); ---- -The following are configuration element descriptions: +The default value is *5000* milliseconds. -* `initial-backoff-millis`: Specifies how long to wait (backoff), in milliseconds, after the first failure before retrying. -Its default value is 1000 ms. -* `max-backoff-millis`: Specifies the upper limit for the backoff in milliseconds. -Its default value is 30000 ms. -* `multiplier`: Factor to multiply the backoff after a failed retry. -Its default value is 1.05. -* `cluster-connect-timeout-millis`: Timeout value in milliseconds for the client to give up -to connect to the current cluster. Its default value is `-1`, i.e., infinite. -For the default value, client will not stop trying to -connect to the target cluster (infinite timeout). If the failover client is used -with the default value of this configuration element, the failover client will try -to connect alternative clusters after 120000 ms (2 minutes). For any other value, -both the client and the failover client will use this as it is. -* `jitter`: Specifies by how much to randomize backoff periods. Its default value is 0. +=== Set a socket interceptor + +[blue]*Hazelcast {enterprise-product-name}* -A pseudo-code is as follows: +Any class implementing `com.hazelcast.nio.SocketInterceptor` is a socket interceptor. +The following code sample shows an example of how to set a socket interceptor: -[source,shell] +[source,java] ---- - begin_time = getCurrentTime() - current_backoff_millis = INITIAL_BACKOFF_MILLIS - while (TryConnect(connectionTimeout)) != SUCCESS) { - if (getCurrentTime() - begin_time >= CLUSTER_CONNECT_TIMEOUT_MILLIS) { - // Give up to connecting to the current cluster and switch to another if exists. - // For the default values, CLUSTER_CONNECT_TIMEOUT_MILLIS is infinite for the - // client and equal to the 120000 ms (2 minutes) for the failover client. 
- } - Sleep(current_backoff_millis + UniformRandom(-JITTER * current_backoff_millis, JITTER * current_backoff_millis)) - current_backoff = Min(current_backoff_millis * MULTIPLIER, MAX_BACKOFF_MILLIS) +public interface SocketInterceptor { + void init(Properties properties); + void onConnect(Socket connectedSocket) throws IOException; } ---- -Note that, `TryConnect` above tries to connect to any member that the client knows, -and for each connection we have a connection timeout; see the -<>. - - -[[blue-green-deployment-and-disaster-recovery]] - -=== Blue-Green Deployment -[[blue-green-mechanism]] -[blue]*Hazelcast {enterprise-product-name} Feature* - -Blue-green deployment refers to a client connection technique that reduces system downtime by deploying two mirrored clusters: blue (active) and green (idle). One of these clusters is running in production while the other is on standby. +The first method initializes the `SocketInterceptor` using the defined properties. +The second method informs when the socket is connected using the `onConnect` method. -Using the blue-green mechanism, clients can connect to another cluster automatically when they are blacklisted from their currently connected cluster. See the xref:{page-latest-supported-mc}@management-center:monitor-imdg:monitor-clients.adoc#changing-cluster-client-filtering[Hazelcast Management Center Reference Manual] for information about blacklisting the clients. +The following example shows how to create a SocketInterceptor and add it to the client configuration: -The client's behavior after this disconnection depends on its -<>. -The following are the options when you are using the blue-green mechanism, i.e., -you have alternative clusters for your clients to connect: +[source,java] +---- +SocketInterceptorConfig socketInterceptorConfig = clientConfig + .getNetworkConfig().getSocketInterceptorConfig(); -* If `reconnect-mode` is set to `ON`, the client changes the cluster and -blocks the invocations while doing so. -* If `reconnect-mode` is set to `ASYNC`, the client changes the cluster -in the background and throws `ClientOfflineException` while doing so. -* If `reconnect-mode` is set to `OFF`, the client does not change the cluster; it shuts down immediately. +MyClientSocketInterceptor myClientSocketInterceptor = new MyClientSocketInterceptor(); -NOTE: Here it could be the case that the whole cluster is restarted. -In this case, the members in the restarted cluster -reject the client's connection request, since the client is trying to connect to the old cluster. -So, the client needs to search for a new cluster, if available and -according to the blue-green configuration (see the following configuration related sections in this section). +socketInterceptorConfig.setEnabled(true); +socketInterceptorConfig.setImplementation(myClientSocketInterceptor); +---- -Consider the following notes for the blue-green mechanism (also valid for the disaster -recovery mechanism described in the next section): +// need comment for above? -* When a client disconnects from a cluster and -connects to a new one the `InitialMemberEvent` and `CLIENT_CHANGED_CLUSTER` events are fired. -* When switching clusters, the client reuses its UUID. -* The client's listener service re-registers its listeners on the new cluster; -the listener service opens a new connection to all members in the current -<> and registers the listeners for each connection. -* The client's Near Caches and Continuous Query Caches are cleared when -the client joins a new cluster successfully. 
-* If the new cluster's partition size is different, the client is rejected by the cluster. -The client is not able to connect to a cluster with different partition count. -* The state of any running job on the original cluster will be undefined. * Streaming jobs may continue running on the original cluster if the cluster is still alive and the switching happened due to a network problem. If you try to query the state of the job using the Job interface, you’ll get a `JobNotFoundException`. +If you want to configure the socket interceptor with a class name instead of an instance, +see the example below: -=== Disaster Recovery Mechanism +[source,java] +---- +SocketInterceptorConfig socketInterceptorConfig = clientConfig + .getNetworkConfig().getSocketInterceptorConfig(); -When one of your clusters is gone due to a failure, the connection between -your clients and members in that cluster is gone too. -When a client is disconnected because of a failure in the cluster, -it first tries to reconnect to the same cluster. +socketInterceptorConfig.setEnabled(true); -The client's behavior after this disconnection depends on its -<>, and it has the same options -that are described in the above section (Blue-Green Mechanism). +//These properties are provided to the interceptor during init +socketInterceptorConfig.setProperty("kerberos-host","kerb-host-name"); +socketInterceptorConfig.setProperty("kerberos-config-file","kerb.conf"); -If you have provided alternative clusters for your clients to connect, -the client tries to connect to those alternative clusters (depending on the `reconnect-mode`). +socketInterceptorConfig.setClassName(MyClientSocketInterceptor.class.getName()); +---- -When a failover starts, i.e., the client is disconnected and was configured -to connect to alternative clusters, the current <> is not considered; -the client cuts all the connections before attempting to connect to a new cluster and tries the clusters as configured. -See the below configuration related sections. +NOTE: For more information, see xref:security:socket-interceptor.adoc[Socket interceptor]. -=== Ordering of Clusters When Clients Try to Connect +=== Configure network socket options -The order of the clusters, that the client will try to connect -in a blue-green or disaster recovery scenario, is decided by -the order of these cluster declarations as given in the client configuration. +You can configure the network socket options using `SocketOptions`. It has the following methods: -Each time the client is disconnected from a cluster and it cannot connect back to the same one, -the configured list is iterated over. Count of these iterations before -the client decides to shut down is provided using the `try-count` configuration element. -See the following configuration related sections. +* `socketOptions.setKeepAlive(x)`: Enables/disables the *SO_KEEPALIVE* socket option. +The default value is `true`. +* `socketOptions.setTcpNoDelay(x)`: Enables/disables the *TCP_NODELAY* socket option. +The default value is `true`. +* `socketOptions.setReuseAddress(x)`: Enables/disables the *SO_REUSEADDR* socket option. +The default value is `true`. +* `socketOptions.setLingerSeconds(x)`: Enables/disables *SO_LINGER* with the specified linger time in seconds. +The default value is `3`. +* `socketOptions.setBufferSize(x)`: Sets the *SO_SNDBUF* and *SO_RCVBUF* options to the specified value in KB for this Socket. +The default value is `32`. 
-We didn't go over the configuration yet (see the following configuration related sections), -but for the sake of explaining the ordering, assume that you have -`client-config1`, `client-config2` and `client-config3` -in the given order as shown below (in your `hazelcast-client-failover` XML or YAML file). -This means you have three alternative clusters. -[tabs] -==== -XML:: -+ --- -[source,xml] +[source,java] ---- - - 4 - - client-config1.xml - client-config2.xml - client-config3.xml - - +SocketOptions socketOptions = clientConfig.getNetworkConfig().getSocketOptions(); +socketOptions.setBufferSize(32) + .setKeepAlive(true) + .setTcpNoDelay(true) + .setReuseAddress(true) + .setLingerSeconds(3); ---- --- -YAML:: -+ -[source,yaml] ----- -hazelcast-client-failover: - try-count: 4 - clients: - - client-config1.yaml - - client-config2.yaml - - client-config3.yaml ----- -==== +=== Enable client TLS +[blue]*Hazelcast {enterprise-product-name}* -And let's say the client is disconnected from the cluster -whose configuration is given by `client-config2.xml`. -Then, the client tries to connect to the next cluster in this list, -whose configuration is given by `client-config3.xml`. When the end of the list is reached, -which is so in this example, and the client could not connect to `client-config3`, -then `try-count` is incremented and the client continues to try to connect starting with `client-config1`. +You can use TLS to secure the connection between the client and the members. +If you want TLS enabled for the client-cluster connection, you should set `SSLConfig`. +For more information, see xref:security:tls-ssl.adoc[TLS]. -This iteration continues until the client connects to a cluster or `try-count` is reached to the configured value. -When the iteration reaches this value and the client still could not connect to a cluster, -it shuts down. Note that, if `try-count` was set to `1` in the above example, -and the client could not connect to `client-config3`, it would shut down since -it already tried once to connect to an alternative cluster. +NOTE: SSL (Secure Sockets Layer) is the predecessor protocol to TLS (Transport Layer Security). +Both protocols encrypt and secure data transmitted over networks but SSL is now considered outdated and has been replaced by TLS for improved security. +Hazelcast code still refers to `ssl` in places for backward compatibility but consider these references to also include TLS. -The following sections describe how you can configure the Java client for -blue-green and disaster recovery scenarios. +Keys and certificates in `keyStores` are used to prove identity to the other side of the connection, and `trustStores` are used to +specify trusted parties (from which the connection should be accepted). +Clients only need to have their `keyStores` specified when xref:security:tls-ssl.adoc#mutual-authentication[TLS Mutual Authentication] is +required by members. -=== Configuring Using CNAME +For a programmatic example, see this xref:java#programmatic-configuration-5[code example]. -Using CNAME, you can change the hostname resolutions and use them dynamically. -Let's describe the configuration with examples. +=== Configure Hazelcast {hazelcast-cloud} + +NOTE: This section is only applicable to the {java-client}. -Assume that you have two clusters, Cluster A and Cluster B, and two Java clients. +You can connect the {java-client} to a {hazelcast-cloud} Standard cluster which is hosted on link:{url-cloud-signup}[{hazelcast-cloud}]. 
+For this, you need to enable {hazelcast-cloud} and specify the cluster's discovery token provided while creating the cluster; this allows the cluster to discover your clients. +See the following example configurations: -First configure the Cluster A members as shown below: +==== Declarative configuration [tabs] ==== @@ -1922,18 +1675,16 @@ XML:: -- [source,xml] ---- - + ... - - - clusterA.member1 - clusterA.member2 - - + + + YOUR_TOKEN + ... - + ---- -- @@ -1941,103 +1692,80 @@ YAML:: + [source,yaml] ---- -hazelcast: +hazelcast-client: network: - join: - tcp-ip: - enabled: true - members: clusterA.member1,clusterA.member2 + ssl: + enabled: true + hazelcast-cloud: + enabled: true + discovery-token: YOUR_TOKEN ---- ==== -Then, configure the Cluster B members as shown below. +==== Programmatic configuration -[tabs] -==== -XML:: -+ --- -[source,xml] +[source,java] ---- - - ... - - - - clusterB.member1 - clusterB.member2 - - - - ... - +ClientConfig config = new ClientConfig(); +ClientNetworkConfig networkConfig = config.getNetworkConfig(); +networkConfig.getCloudConfig().setDiscoveryToken("TOKEN").setEnabled(true); +networkConfig.setSSLConfig(new SSLConfig().setEnabled(true)); +HazelcastInstance client = HazelcastClient.newHazelcastClient(config); ---- --- -YAML:: -+ -[source,yaml] ----- -hazelcast: - network: - join: - tcp-ip: - enabled: true - members: clusterB.member1,clusterB.member2 ----- +{hazelcast-cloud} is disabled for the Java client, by default (`enabled` attribute is `false`). + +See xref:cloud:ROOT:overview.adoc[Hazelcast {hazelcast-cloud}] for more information about {hazelcast-cloud}. + +NOTE: Because this is a REST based discovery, you need to enable the REST listener service. +See the xref:clients:rest.adoc#using-the-rest-endpoint-groups[REST Endpoint Groups section] on how to enable REST endpoints. + +include::partial$rest-deprecation.adoc[] + +[NOTE] ==== +For security reasons, we recommend you enable certificate revocation status JRE-wide. +You need to set the following Java system properties to `true`: -Configure the two clients as shown below. +* `com.sun.net.ssl.checkRevocation` +* `com.sun.security.enableCRLDP` -[tabs] -==== -Client 1 XML:: -+ --- -[source,xml] ----- - - ... - cluster-a - - -
-            <address>production1.myproject</address>
-            <address>production2.myproject</address>
-        </cluster-members>
-    </network>
- ... -
----- --- +And you need to set the Java security property as follows: -YAML:: -+ -[source,yaml] ----- -hazelcast-client: - cluster-name: cluster-a - network: - cluster-members: - - production1.myproject - - production2.myproject ----- +`Security.setProperty("ocsp.enable", "true")` + +You can find more details on the related security topics from the Oracle Docs on +http://docs.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CERTPATH[JSSE Ref Guide] and +http://docs.oracle.com/javase/6/docs/technotes/guides/security/certpath/CertPathProgGuide.html#AppC[Cert Path Prog Guide]. ==== +=== Configure client for AWS + +The example declarative and programmatic configurations below show +how to configure a Java client for connecting to a Hazelcast cluster in AWS (Amazon Web Services). + +==== Declarative Configuration + [tabs] ==== -Client 2 XML:: +XML:: + -- [source,xml] ---- ... - cluster-b - -
-            <address>production1.myproject</address>
-            <address>production2.myproject</address>
-        </cluster-members>
+ + true + my-access-key + my-secret-key + us-west-1 + ec2.amazonaws.com + hazelcast-sg + type + hz-members +
...
@@ -2049,212 +1777,131 @@ YAML:: [source,yaml] ---- hazelcast-client: - cluster-name: cluster-b network: - cluster-members: - - production1.myproject - - production2.myproject + aws: + enabled: true + use-public-ip: true + access-key: my-access-key + secret-key: my-secret-key + region: us-west-1 + host-header: ec2.amazonaws.com + security-group-name: hazelcast-sg + tag-key: type + tag-value: hz-members ---- ==== -Assuming that the client configuration file names of the above example clients are -`hazelcast-client-c1.xml/yaml` and `hazelcast-client-c2.xml/yaml`, you should configure the -client failover for a blue-green deployment scenario as follows: - -[tabs] -==== -XML:: -+ --- -[source,xml] ----- - - 4 - - hazelcast-client-c1.xml - hazelcast-client-c2.xml - - ----- --- +==== Programmatic Configuration -YAML:: -+ -[source,yaml] +[source,java] ---- -hazelcast-client-failover: - try-count: 4 - clients: - - hazelcast-client-c1.yaml - - hazelcast-client-c2.yaml +include::ROOT:example$/clients/ExampleClientAwsConfig.java[tag=clientaws] ---- -==== -NOTE: You can find the complete Hazelcast client failover -example configuration file (`hazelcast-client-failover-full-example`) -both in XML and YAML formats including the descriptions of elements and attributes, -in the `/bin` directory of your Hazelcast download directory. +For more information on AWS configuration elements (except `use-public-ip`), see xref:clusters:network-configuration.adoc#aws-element[AWS Element section on network configuration]. -You should also configure your clients to forget DNS lookups using the -https://docs.oracle.com/javase/7/docs/technotes/guides/net/properties.html[networkaddress.cache.ttl^] JVM parameter. - -Configure the addresses in your clients' configuration to resolve to hostnames of -Cluster A via CNAME so that the clients will connect to Cluster A when it starts: +If the `use-public-ip` element is set to `true`, the private addresses of cluster members +are always converted to public addresses. Also, the client uses public addresses to +connect to the members. In order to use private addresses, set the `use-public-ip` parameter to `false`. -`production1.myproject` → `clusterA.member1` +NOTE: When connecting outside from AWS, if you set the `use-public-ip` parameter to `false` then +the client will not be able to reach the members. -`production2.myproject` → `clusterA.member2` +== Use client services -When you want the clients to switch to the other cluster, change the mapping as follows: - -`production1.myproject` → `clusterB.member1` +Hazelcast provides the following client services. -`production2.myproject` → `clusterB.member2` +=== Use distributed executor service -Wait for the time you configured using the `networkaddress.cache.ttl` JVM parameter for -the client JVM to forget the old mapping. +The distributed executor service is for distributed computing. +It can be used to execute tasks on the cluster on a designated partition or on all the partitions. +It can also be used to process entries. For more information, see xref:computing:executor-service.adoc[]. -Blacklist the clients in Cluster A using the Hazelcast Management Center. +``` +IExecutorService executorService = client.getExecutorService("default"); +``` -=== Configuring Without CNAME +After getting an instance of `IExecutorService`, you can use the instance as +the interface with the one provided on the server side. See +xref:computing:distributed-computing.adoc[] for detailed usage. 
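The following sketch is illustrative only (the task class `ExecutorClientSample`/`EchoTask` and the use of the `default` executor are assumptions, not taken from the original page); it shows one way a client can submit a serializable `Callable` through `IExecutorService` and wait for the result:

[source,java]
----
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;

import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

public class ExecutorClientSample {

    // Tasks are sent to a member for execution, so they must be serializable.
    static class EchoTask implements Callable<String>, Serializable {
        @Override
        public String call() {
            return "executed on a member";
        }
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance client = HazelcastClient.newHazelcastClient();
        IExecutorService executorService = client.getExecutorService("default");

        // Submit the task to any member and block until the result arrives.
        Future<String> result = executorService.submit(new EchoTask());
        System.out.println(result.get());

        client.shutdown();
    }
}
----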
-Let's first give example configurations and describe the configuration elements. +=== Listen to client connections -**Declarative Configuration:** +If you need to track clients and want to listen to their connection events, +you can use the `clientConnected()` and `clientDisconnected()` methods of the `ClientService` class. +This class must be run on the **member** side. The following code shows an example of how to do this: -[tabs] -==== -XML:: -+ --- -[source,xml] +[source,java] ---- - - 4 - - hazelcast-client-c1.xml - hazelcast-client-c2.xml - - +include::ROOT:example$/clients/ListeningClients.java[tag=lc] ---- --- -YAML:: -+ -[source,yaml] +=== Find the partition of a key + +You use the partition service to find the partition of a key. +It returns all partitions. See the example code below: + +[source,java] ---- -hazelcast-client-failover: - try-count: 4 - clients: - - hazelcast-client-c1.yaml - - hazelcast-client-c2.yaml +PartitionService partitionService = client.getPartitionService(); + +//partition of a key +Partition partition = partitionService.getPartition(key); + +//all partitions +Set partitions = partitionService.getPartitions(); ---- -==== -**Programmatic Configuration:** +=== Handling Lifecycle + +Lifecycle handling performs: + +* checking if the client is running +* shutting down the client gracefully +* terminating the client ungracefully (forced shutdown) +* adding/removing lifecycle listeners. [source,java] ---- -ClientConfig clientConfig = new ClientConfig(); -clientConfig.setClusterName("cluster-a"); -ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig(); -networkConfig.addAddress("10.216.1.18", "10.216.1.19"); +LifecycleService lifecycleService = client.getLifecycleService(); -ClientConfig clientConfig2 = new ClientConfig(); -clientConfig2.setClusterName("cluster-b"); -ClientNetworkConfig networkConfig2 = clientConfig2.getNetworkConfig(); -networkConfig2.addAddress( "10.214.2.10", "10.214.2.11"); +if(lifecycleService.isRunning()){ + //it is running +} -ClientFailoverConfig clientFailoverConfig = new ClientFailoverConfig(); -clientFailoverConfig.addClientConfig(clientConfig).addClientConfig(clientConfig2).setTryCount(10) -HazelcastInstance client = HazelcastClient.newHazelcastFailoverClient(clientFailoverConfig); +//shutdown client gracefully +lifecycleService.shutdown(); ---- -The following are the descriptions for the configuration elements: +// == Client messaging +== Build data pipeline -* `try-count`: Count of connection retries by the client to the alternative clusters. -When this value is reached and the client still could not connect to a cluster, the client -shuts down. Note that this value applies to the alternative clusters whose configurations are provided -with the `client` element. For the above example, two alternative clusters are given -with the `try-count` set as `4`. This means the number of connection attempts is -4 x 2 = 8. -* `client`: Path to the client configuration that corresponds to an alternative cluster that the client will try to connect. 
- -The client configurations must be exactly the same except the following configuration options: +To build a data pipeline: -* `SecurityConfig` -* `NetworkConfig.Addresses` -* `NetworkConfig.SocketInterceptorConfig` -* `NetworkConfig.SSLConfig` -* `NetworkConfig.AwsConfig` -* `NetworkConfig.GcpConfig` -* `NetworkConfig.AzureConfig` -* `NetworkConfig.KubernetesConfig` -* `NetworkConfig.EurekaConfig` -* `NetworkConfig.CloudConfig` -* `NetworkConfig.DiscoveryConfig` - -You can also configure it within the Spring context, as shown below: - -[source,xml] +[source,java] ---- - - - - - - 127.0.0.1:5700 - 127.0.0.1:5701 - - - - - - - 127.0.0.1:5702 - 127.0.0.1:5703 - - - - - +Pipeline EvenNumberStream = Pipeline.create(); +EvenNumberStream.readFrom(TestSources.itemStream(10)) + .withoutTimestamps() + .filter(event -> event.sequence() % 2 == 0) + .setName("filter out odd numbers") + .writeTo(Sinks.logger()); +client.getJet().newJob(EvenNumberStream); ---- -== Java Client Failure Detectors - -The client failure detectors are responsible to determine if a member in the cluster is unreachable or crashed. -The most important problem in the failure detection is to distinguish -whether a member is still alive but slow, or has crashed. -But according to the famous http://dl.acm.org/citation.cfm?doid=3149.214121[FLP result^], -it is impossible to distinguish a crashed member from a slow one in an asynchronous system. -A workaround to this limitation is to use unreliable failure detectors. -An unreliable failure detector allows a member to suspect that others have failed, -usually based on liveness criteria but it can make mistakes to a certain degree. - -Hazelcast Java client has two built-in failure detectors: Deadline Failure Detector and -Ping Failure Detector. These client failure detectors work independently of -the member failure detectors, e.g., you do not need to enable the member failure detectors -to benefit from the client ones. - -=== Client Deadline Failure Detector - -_Deadline Failure Detector_ uses an absolute timeout for missing/lost heartbeats. -After timeout, a member is considered as crashed/unavailable and marked as suspected. +For details about data pipelines, see xref:pipelines:overview.adoc[]. -_Deadline Failure Detector_ has two configuration properties: +=== Define client labels +// decide placement -* `hazelcast.client.heartbeat.interval`: This is the interval at which client sends -heartbeat messages to members. -* `hazelcast.client.heartbeat.timeout`: This is the timeout which defines when -a cluster member is suspected, because it has not sent any response back to client requests. +You can define labels in your Java client, similar to the way labels are used for members. +With client labels you can assign special roles for your clients and +use these roles to perform actions specific to those client connections. For more information on labels, see xref:management:cluster-utilities.adoc[Cluster Utilities]. -NOTE: The value of `hazelcast.client.heartbeat.interval` should be smaller than -that of `hazelcast.client.heartbeat.timeout`. In addition, the value of system property -xref:ROOT:system-properties.adoc#client-max-no[`hazelcast.client.max.no.heartbeat.seconds`], which is set on the member side, -should be larger than that of `hazelcast.client.heartbeat.interval`. +You can also group your clients using labels. You can use Hazelcast Management Center to blocklist these client groups to prevent them connecting to a cluster. 
For more information, see xref:{page-latest-supported-mc}@management-center:clusters:client-filtering.adoc[]. -The following is a declarative example showing how you can configure the Deadline Failure Detector -for your client (in the client's configuration XML file, e.g., `hazelcast-client.xml`): +The following declarative example shows how to define client using the `client-labels` +configuration element: [tabs] ==== @@ -2265,11 +1912,12 @@ XML:: ---- ... - - 60000 - 5000 - - ... + barClient + + + + + .... ---- -- @@ -2279,169 +1927,55 @@ YAML:: [source,yaml] ---- hazelcast-client: - properties - hazelcast.client.heartbeat.timeout: 60000 - hazelcast.client.heartbeat.interval: 5000 + instance-name: barClient + client-labels: + - user + - bar ---- ==== -And, the following is the equivalent programmatic configuration: +The following programmatic example shows how to define client using the `client-labels` +configuration element: [source,java] ---- -ClientConfig config = ...; -config.setProperty("hazelcast.client.heartbeat.timeout", "60000"); -config.setProperty("hazelcast.client.heartbeat.interval", "5000"); -[...] ----- - -=== Client Ping Failure Detector - -In addition to the Deadline Failure Detector, the Ping Failure Detector may be configured on your client. -Please note that this detector is disabled by default. The Ping Failure Detector -operates at Layer 3 of the OSI protocol and provides much quicker and more deterministic -detection of hardware and other lower level events. -When the JVM process has enough permissions to create RAW sockets, the implementation -chooses to rely on ICMP Echo requests. This is preferred. - -If there are not enough permissions, it can be configured to fallback on attempting -a TCP Echo on port 7. In the latter case, both a successful connection or an explicit rejection -is treated as "Host is Reachable". Or, it can be forced to use only RAW sockets. -This is not preferred as each call creates a heavyweight socket and moreover the Echo service is typically disabled. - -For the Ping Failure Detector to rely **only** on the ICMP Echo requests, -the following criteria need to be met: - -* Supported OS: as of Java 1.8 only Linux/Unix environments are supported. -* The Java executable must have the `cap_net_raw` capability. -* The file `ld.conf` must be edited to overcome the rejection by the dynamic -linker when loading libs from untrusted paths. -* ICMP Echo Requests must not be blocked by the receiving hosts. - -The details of these requirements are explained in the -xref:clusters:failure-detector-configuration.adoc#requirements-and-linuxunix-configuration[Requirements section] of -Hazelcast members' xref:clusters:failure-detector-configuration.adoc#ping-failure-detector[Ping Failure Detector]. - -If any of the above criteria isn't met, then `isReachable` will always -fall back on TCP Echo attempts on port 7. - -An example declarative configuration to use the Ping Failure Detector is -as follows (in the client's configuration XML file, e.g., `hazelcast-client.xml`): +ClientConfig clientConfig = new ClientConfig(); +clientConfig.setInstanceName("ExampleClientName"); +clientConfig.addLabel("user"); +clientConfig.addLabel("bar"); -[tabs] -==== -XML:: -+ --- -[source,xml] ----- - - ... - - - 1000 - 1000 - 255 - false - 2 - - - ... 
- +HazelcastClient.newHazelcastClient(clientConfig); ---- --- -YAML:: -+ -[source,yaml] ----- -hazelcast-client: - network: - icmp-ping: - enabled: false - timeout-milliseconds: 1000 - interval-milliseconds: 1000 - ttl: 255 - echo-fail-fast-on-startup: false - max-attempts: 2 ----- -==== +For an working code sample using client labels, see the https://github.com/hazelcast/hazelcast-code-samples/tree/master/clients/client-labels[Client labels code sample]. -And, the equivalent programmatic configuration: +== Query with SQL +To query a map using SQL: [source,java] ---- -ClientConfig config = ...; - -ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig(); -ClientIcmpPingConfig clientIcmpPingConfig = networkConfig.getClientIcmpPingConfig(); -clientIcmpPingConfig.setIntervalMilliseconds(1000) - .setTimeoutMilliseconds(1000) - .setTtl(255) - .setMaxAttempts(2) - .setEchoFailFastOnStartup(false) - .setEnabled(true); +String query = + "SELECT * FROM customers csv_likes"; +try (SqlResult result = client.getSql().execute(query)) { + for (SqlRow row : result) { + System.out.println("" + row.getObject(0)); + } +} ---- -The following are the descriptions of configuration elements and attributes: - -* `enabled`: Enables the legacy ICMP detection mode, works cooperatively with -the existing failure detector and only kicks-in after a pre-defined period -has passed with no heartbeats from a member. Its default value is `false`. -* `timeout-milliseconds`: Number of milliseconds until a ping attempt is -considered failed if there was no reply. Its default value is *1000* milliseconds. -* `max-attempts`: Maximum number of ping attempts before the member gets -suspected by the detector. Its default value is *3*. -* `interval-milliseconds`: Interval, in milliseconds, between each ping attempt. -1000ms (1 sec) is also the minimum interval allowed. Its default value is *1000* milliseconds. -* `ttl`: Maximum number of hops the packets should go through. -Its default value is *255*. You can set to *0* to use your system's default TTL. - -In the above example configuration, the Ping Failure Detector attempts 2 pings, -one every second, and waits up to 1 second for each to complete. -If there is no successful ping after 2 seconds, the member gets suspected. - -To enforce the xref:clusters:failure-detector-configuration.adoc#requirements-and-linuxunix-configuration[Requirements], -the property `echo-fail-fast-on-startup` can also be set to `true`, in which case Hazelcast fails to start if any of the requirements -isn't met. - -Unlike the Hazelcast members, Ping Failure Detector works always in parallel with -Deadline Failure Detector on the clients. -Below is a summary table of all possible configuration combinations of the Ping Failure Detector. - -|=== -| ICMP| Fail-Fast| Description| Linux| Windows | macOS - -| true -| false -| Parallel ping detector, works in parallel with the configured failure detector. -Checks periodically if members are live (OSI Layer 3) and suspects them immediately, -regardless of the other detectors. -| Supported ICMP Echo if available - Falls back on TCP Echo on port 7 -| Supported TCP Echo on port 7 -| Supported ICMP Echo if available - Falls back on TCP Echo on port 7 +For details about querying with SQL, see xref:query:sql-overview.adoc[]. -| true -| true -| Parallel ping detector, works in parallel with the configured failure detector. -Checks periodically if members are live (OSI Layer 3) and suspects them immediately, -regardless of the other detectors. 
-| Supported - Requires OS Configuration Enforcing ICMP Echo if available - No start up if not available -| Not Supported -| Not Supported - Requires root privileges -|=== +// == Advanced configuration +// change level -== Client System Properties +== Client system properties -There are some advanced client configuration properties to tune some aspects of Hazelcast Client. -You can set them as property name and value pairs through declarative configuration, -programmatic configuration, or JVM system property. See the xref:ROOT:system-properties.adoc[System Properties appendix] -to learn how to set these properties. +There are some advanced client configuration properties that help you tune the {java-client}. +You can set them as property name and value pairs through either declarative or programmatic configuration, or with a JVM system property. For more information on system properties in general, including how to set them, see xref:ROOT:system-properties.adoc[System Properties]. -NOTE: When you want to reconfigure a system property, you need to restart the clients for -which the property is modified. +// big overlap with System properties, section needs review and edit -The table below lists the client configuration properties with their descriptions. +NOTE: You need to restart clients after modifying system properties. [cols="4a,1,1,4a"] .Client System Properties @@ -2451,7 +1985,9 @@ The table below lists the client configuration properties with their description |`hazelcast.client.cloud.discovery.token` | |long -|Token to use when discovering the cluster via {hazelcast-cloud}. +|Token to use when discovering the cluster via {hazelcast-cloud}. + +NOTE: Not supported by {java-client-new}. |`hazelcast.client.concurrent.window.ms` |100 @@ -2464,6 +2000,8 @@ Setting it too high effectively disables the optimization because once concurren it will keep that way. Setting it too low could lead to suboptimal performance because the system will try to use write-through and other optimizations even though the system is concurrent. +NOTE: Not supported by {java-client-new}. + |`hazelcast.discovery.enabled` |false |bool @@ -2629,6 +2167,8 @@ increased performance and reduced memory usage. to the same member when this property is `true`. When it is set to `false`, the client tries to connect to the members in the given order. +NOTE: Not supported by {java-client-new}. + |`hazelcast.client.connectivity.logging.delay.seconds` |10 |int @@ -2656,30 +2196,269 @@ The value set here is used as `hazelcast.client.metrics.collection.frequency`. If both are configured, this one is ignored. |=== -== Using High-Density Memory Store with Java Client +== Advanced configuration +=== Declarative configuration -[navy]*Hazelcast {enterprise-product-name}* +You can configure the client declaratively (XML), programmatically (API), or +using client system properties. -If you have [navy]*Hazelcast {enterprise-product-name}*, your Hazelcast Java client's Near Cache -can benefit from the High-Density Memory Store. +For declarative configuration, the client checks the following places for the client configuration file: -Let's recall the Java client's Near Cache configuration -(see the <>) -**without** High-Density Memory Store: +* **System property**: The client first checks if the `hazelcast.client.config` system property is +set to a file path e.g. `-Dhazelcast.client.config=C:/myhazelcast.xml`. 
+* **Classpath**: If the configuration file is not set as a system property, the client checks the classpath for the `hazelcast-client.xml` file. -[source,xml] ----- - - ... - - - 0 - 0 - true - OBJECT - - ... - +If the client does not find a configuration file, it starts with the default configuration +(`hazelcast-client-default.xml`) from the `hazelcast.jar` library. + +TIP: Before changing the configuration file, try using the default configuration as a first step. The default configuration should be fine for most environments but you can always consider a custom configuration if it doesn't fit your requirements. + +If you want to define your own configuration file to create a `Config` object, you can do this using: + +* `Config cfg = new XmlClientConfigBuilder(xmlFileName).build();` +* `Config cfg = new XmlClientConfigBuilder(inputStream).build();` + +// are these just examples? is this section complete? Not programmatic config removed as covered previously +=== Client load balancer +`LoadBalancer` enables you to send operations to one of a number of endpoints (members). +Its main purpose is to determine the next `member`, if queried. You can use the `com.hazelcast.client.LoadBalancer` interface to apply different load balancing policies. + +For <>, the behaviour is as follows: + +* If set to `ALL_MEMBERS` only the non key-based operations are routed to the endpoint returned by the `LoadBalancer` +* If set to `SINGLE_MEMBER`, `LoadBalancer` is ignored +* If set to `MULTI_MEMBER`, best effort is made to route operations to the required member. If this can't be done for any reason, operations are routed as defined by the `LoadBalancer` + +NOTE: If you are using smart or unisocket client operation modes, see https://docs.hazelcast.com/hazelcast/5.4/clients/java#configuring-client-load-balancer[previous documentation on this topic]. + +For example configurations, see the following code samples: + +==== Declarative configuration + +[tabs] +==== +XML:: ++ +-- +[source,xml] +---- + + ... + + ... + +---- +-- + +YAML:: ++ +[source,yaml] +---- +hazelcast-client: + load-balancer: + type: random +---- +==== + +==== Programmatic configuration + +[source,java] +---- +ClientConfig clientConfig = new ClientConfig(); +clientConfig.setLoadBalancer(yourLoadBalancer); +---- + +[[client-serialization-configuration]] +=== Configure serialization + +For client side serialization, use the Hazelcast configuration. +For more information, see xref:serialization:serialization.adoc[Serialization]. + +[[configuring-reliable-topic-at-client-side]] +=== Configure reliable topic on client side + +Normally when a client uses a Hazelcast data structure, +that structure is configured on the member side and the client uses that configuration. +For the Reliable Topic structure, which is backed by Ringbuffer, you need to configure it on the client side instead. The class used for this configuration is `ClientReliableTopicConfig`. + +Here is an example programmatic configuration: + +[source,java] +---- +include::ROOT:example$/clients/ExampleRTClient.java[tag=rtclient] +---- + +When you create a Reliable Topic structure on your client, a Ringbuffer +(with the same name as the Reliable Topic) is automatically created on the member side, +with the default configuration. See the xref:data-structures:ringbuffer.adoc[Configuring Ringbuffer section] for the defaults. +You can edit that configuration according to your needs. 
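+
+As an illustration only (the topic name, batch size, and overload policy below are example values, not defaults), a minimal programmatic sketch along these lines configures and then uses a Reliable Topic from the client:
+
+[source,java]
+----
+ClientConfig clientConfig = new ClientConfig();
+
+// Configure the Reliable Topic on the client side; the backing Ringbuffer is
+// created on the member side with its default (or member-configured) settings.
+ClientReliableTopicConfig topicConfig = new ClientReliableTopicConfig("statusUpdates");
+topicConfig.setReadBatchSize(10)
+        .setTopicOverloadPolicy(TopicOverloadPolicy.BLOCK);
+clientConfig.addReliableTopicConfig(topicConfig);
+
+HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
+
+// Obtain the Reliable Topic, then listen and publish.
+ITopic<String> topic = client.getReliableTopic("statusUpdates");
+topic.addMessageListener(message -> System.out.println("Received: " + message.getMessageObject()));
+topic.publish("started");
+----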
+
+You can also configure a Reliable Topic structure declaratively on the client side, as the following example shows:
+
+[tabs]
+====
+XML::
++
+--
+[source,xml]
+----
+<hazelcast-client>
+    ...
+    <ringbuffer name="default">
+        <capacity>10000000</capacity>
+        <time-to-live-seconds>5</time-to-live-seconds>
+    </ringbuffer>
+    <reliable-topic name="default">
+        <topic-overload-policy>BLOCK</topic-overload-policy>
+        <read-batch-size>10</read-batch-size>
+    </reliable-topic>
+    ...
+</hazelcast-client>
+----
+--
+
+YAML::
++
+[source,yaml]
+----
+hazelcast-client:
+  ringbuffer:
+    default:
+      capacity: 10000000
+      time-to-live-seconds: 5
+  reliable-topic:
+    default:
+      topic-overload-policy: BLOCK
+      read-batch-size: 10
+----
+====
+
+=== Configure client connection retry
+
+When a client is disconnected from the cluster, or is trying to connect to a cluster
+for the first time, it searches for new connections. You can configure the frequency
+of the connection attempts and the client shutdown behavior using
+`ConnectionRetryConfig` (programmatic) or `connection-retry` (declarative).
+
+==== Declarative configuration
+
+[tabs]
+====
+XML::
++
+--
+[source,xml]
+----
+<hazelcast-client>
+    ...
+    <connection-strategy async-start="false" reconnect-mode="ON">
+        <connection-retry>
+            <initial-backoff-millis>1000</initial-backoff-millis>
+            <max-backoff-millis>60000</max-backoff-millis>
+            <multiplier>2</multiplier>
+            <cluster-connect-timeout-millis>50000</cluster-connect-timeout-millis>
+            <jitter>0.2</jitter>
+        </connection-retry>
+    </connection-strategy>
+    ...
+</hazelcast-client>
+----
+--
+
+YAML::
++
+[source,yaml]
+----
+hazelcast-client:
+  connection-strategy:
+    async-start: false
+    reconnect-mode: ON
+    connection-retry:
+      initial-backoff-millis: 1000
+      max-backoff-millis: 60000
+      multiplier: 2
+      cluster-connect-timeout-millis: 50000
+      jitter: 0.2
+----
+====
+
+==== Programmatic configuration
+
+[source,java]
+----
+ClientConfig config = new ClientConfig();
+ClientConnectionStrategyConfig connectionStrategyConfig = config.getConnectionStrategyConfig();
+ConnectionRetryConfig connectionRetryConfig = connectionStrategyConfig.getConnectionRetryConfig();
+connectionRetryConfig.setInitialBackoffMillis(1000)
+        .setMaxBackoffMillis(60000)
+        .setMultiplier(2)
+        .setClusterConnectTimeoutMillis(50000)
+        .setJitter(0.2);
+----
+
+The following are the configuration element descriptions:
+
+* `initial-backoff-millis`: Specifies how long to wait (back off), in milliseconds, after the first failure before retrying.
+The default value is 1000 ms.
+* `max-backoff-millis`: Specifies the upper limit for the backoff in milliseconds.
+The default value is 30000 ms.
+* `multiplier`: Factor by which the backoff is multiplied after a failed retry.
+The default value is 1.05.
+* `cluster-connect-timeout-millis`: Timeout value in milliseconds for the client to give up
+connecting to the current cluster. The default value is `-1`, i.e. infinite.
+With the default value, the client never stops trying to
+connect to the target cluster (infinite timeout). If the failover client is used
+with the default value of this configuration element, the failover client tries
+to connect to alternative clusters after 120000 ms (2 minutes). For any other value,
+both the client and the failover client use it as it is.
+* `jitter`: Specifies how much to randomize backoff periods. The default value is 0.
+
+The pseudo-code is as follows:
+
+[source,shell]
+----
+begin_time = getCurrentTime()
+current_backoff_millis = INITIAL_BACKOFF_MILLIS
+while (TryConnect(connectionTimeout) != SUCCESS) {
+    if (getCurrentTime() - begin_time >= CLUSTER_CONNECT_TIMEOUT_MILLIS) {
+        // Give up connecting to the current cluster and switch to another one, if it exists.
+        // For the default values, CLUSTER_CONNECT_TIMEOUT_MILLIS is infinite for the
+        // client and equal to 120000 ms (2 minutes) for the failover client.
+ } + Sleep(current_backoff_millis + UniformRandom(-JITTER * current_backoff_millis, JITTER * current_backoff_millis)) + current_backoff = Min(current_backoff_millis * MULTIPLIER, MAX_BACKOFF_MILLIS) + } +---- + +`TryConnect` tries to connect to any member that the client knows, +and the connection timeout applies for each connection. For more information, see +<>. + + +=== Use High-Density Memory Store with Java Client +[navy]*Hazelcast {enterprise-product-name}* + +If you have [navy]*Hazelcast {enterprise-product-name}*, the client's Near Cache +can benefit from the High-Density Memory Store. + +Let's consider the Java client's Near Cache configuration +(see the <>) +**without** High-Density Memory Store: + +[source,xml] +---- + + ... + + + 0 + 0 + true + OBJECT + + ... + ---- You can configure this Near Cache to use Hazelcast's High-Density Memory Store @@ -2709,10 +2488,10 @@ Available values are as follows: ** USED_NATIVE_MEMORY_PERCENTAGE: Maximum used native memory percentage. ** FREE_NATIVE_MEMORY_SIZE: Minimum free native memory size to trigger cleanup. ** FREE_NATIVE_MEMORY_PERCENTAGE: Minimum free native memory percentage to trigger cleanup. -* `eviction-policy`: Eviction policy configuration. Its default values is NONE. +* `eviction-policy`: Eviction policy configuration. The default value is NONE. Available values are as follows: ** NONE: No items are evicted and the `size` property is ignored. -You still can combine it with time-to-live-seconds. +You can still combine it with time-to-live-seconds. ** LRU: Least Recently Used. ** LFU: Least Frequently Used. @@ -2721,3 +2500,333 @@ usage for your client, using the `` element in the client's confi See the xref:storage:high-density-memory.adoc[High-Density Memory Store section] for more information about Hazelcast's High-Density Memory Store feature. + + + + + +[[blue-green-deployment-and-disaster-recovery]] + +=== Blue-Green Deployment +[[blue-green-mechanism]] +[blue]*Hazelcast {enterprise-product-name}* + +Blue-green deployment refers to a client connection technique that reduces system downtime by deploying two mirrored clusters: blue (active) and green (idle). One of these clusters is running in production while the other is on standby. + +Using the blue-green mechanism, clients can connect to another cluster automatically when they are blacklisted from their currently connected cluster. See the xref:{page-latest-supported-mc}@management-center:monitor-imdg:monitor-clients.adoc#changing-cluster-client-filtering[Hazelcast Management Center Reference Manual] for information about blacklisting the clients. + +The client's behavior after this disconnection depends on its +<>. +The following are the options when you are using the blue-green mechanism, i.e., +you have alternative clusters for your clients to connect: + +* If `reconnect-mode` is set to `ON`, the client changes the cluster and +blocks the invocations while doing so. +* If `reconnect-mode` is set to `ASYNC`, the client changes the cluster +in the background and throws `ClientOfflineException` while doing so. +* If `reconnect-mode` is set to `OFF`, the client does not change the cluster; it shuts down immediately. + +NOTE: Here it could be the case that the whole cluster is restarted. +In this case, the members in the restarted cluster +reject the client's connection request, since the client is trying to connect to the old cluster. 
+So, the client needs to search for a new cluster, if one is available according to
+the blue-green configuration (see the following configuration sections).
+
+Consider the following notes for the blue-green mechanism (also valid for the disaster
+recovery mechanism described in the next section):
+
+* When a client disconnects from a cluster and
+connects to a new one, the `InitialMemberEvent` and `CLIENT_CHANGED_CLUSTER` events are fired.
+* When switching clusters, the client reuses its UUID.
+* The client's listener service re-registers its listeners on the new cluster;
+the listener service opens a new connection to all members in the current
+<> and registers the listeners for each connection.
+* The client's Near Caches and Continuous Query Caches are cleared when
+the client joins a new cluster successfully.
+* If the new cluster's partition count is different, the client is rejected by the cluster.
+The client cannot connect to a cluster with a different partition count.
+* The state of any running job on the original cluster will be undefined.
+* Streaming jobs may continue running on the original cluster if the cluster is still alive and the switch happened due to a network problem. If you try to query the state of the job using the Job interface, you'll get a `JobNotFoundException`.
+
+=== Disaster recovery mechanism
+
+When one of your clusters is gone due to a failure, the connection between
+your clients and the members in that cluster is gone too.
+When a client is disconnected because of a failure in the cluster,
+it first tries to reconnect to the same cluster.
+
+The client's behavior after this disconnection depends on its
+<>, and it has the same options
+that are described in the previous section (Blue-Green Deployment).
+
+If you have provided alternative clusters for your clients to connect to,
+the client tries to connect to those alternative clusters (depending on the `reconnect-mode`).
+
+When a failover starts, i.e., the client is disconnected and was configured
+to connect to alternative clusters, the current <> is not considered;
+the client cuts all connections before attempting to connect to a new cluster and tries the clusters as configured.
+See the configuration sections below.
+
+=== Reconnect order for clusters
+
+The order of clusters that the client tries to reconnect to
+in a blue-green or disaster recovery scenario is determined by
+the order of the cluster declarations in the client failover configuration.
+
+Every time the client disconnects from a cluster and cannot connect back to the same cluster,
+this list is iterated over. The `try-count` configuration element limits the number of iterations before the client shuts down.
+
+As an example, assume that your `hazelcast-client-failover` XML or YAML file defines the following order:
+
+* `client-config1`
+* `client-config2`
+* `client-config3`
+
+This means you have three alternative clusters.
+
+[tabs]
+====
+XML::
++
+--
+[source,xml]
+----
+<hazelcast-client-failover>
+    <try-count>4</try-count>
+    <clients>
+        <client>client-config1.xml</client>
+        <client>client-config2.xml</client>
+        <client>client-config3.xml</client>
+    </clients>
+</hazelcast-client-failover>
+----
+--
+
+YAML::
++
+[source,yaml]
+----
+hazelcast-client-failover:
+  try-count: 4
+  clients:
+    - client-config1.yaml
+    - client-config2.yaml
+    - client-config3.yaml
+----
+====
+
+If the client is disconnected from the cluster configured in `client-config2`, the client tries to connect to the next cluster in the list, which is `client-config3`.
+If the client fails to connect to this cluster, the `try-count` is incremented and the client tries to connect to the next alternative cluster, which in this case is `client-config1` (the iteration wraps around to the start of the list). This iteration continues until either the client successfully connects to a cluster or the `try-count` limit is reached. If the `try-count` limit is reached without a successful connection, the client shuts down.
+
+// check try-count logic
+
+== Failures
+=== Client deadline failure detector
+
+_Deadline Failure Detector_ uses an absolute timeout for missing/lost heartbeats.
+After this timeout, a member is considered crashed/unavailable and is marked as suspected.
+
+_Deadline Failure Detector_ has two configuration properties:
+
+* `hazelcast.client.heartbeat.interval`: This is the interval at which the client sends
+heartbeat messages to members.
+* `hazelcast.client.heartbeat.timeout`: This is the timeout after which
+a cluster member is suspected because it has not sent any response back to client requests.
+
+NOTE: The value of `hazelcast.client.heartbeat.interval` should be smaller than
+that of `hazelcast.client.heartbeat.timeout`. In addition, the value of the system property
+xref:ROOT:system-properties.adoc#client-max-no[`hazelcast.client.max.no.heartbeat.seconds`], which is set on the member side,
+should be larger than that of `hazelcast.client.heartbeat.interval`.
+
+The following is a declarative example showing how you can configure the Deadline Failure Detector
+for your client (in the client's configuration XML file, e.g., `hazelcast-client.xml`):
+
+[tabs]
+====
+XML::
++
+--
+[source,xml]
+----
+<hazelcast-client>
+    ...
+    <properties>
+        <property name="hazelcast.client.heartbeat.timeout">60000</property>
+        <property name="hazelcast.client.heartbeat.interval">5000</property>
+    </properties>
+    ...
+</hazelcast-client>
+----
+--
+
+YAML::
++
+[source,yaml]
+----
+hazelcast-client:
+  properties:
+    hazelcast.client.heartbeat.timeout: 60000
+    hazelcast.client.heartbeat.interval: 5000
+----
+====
+
+The following is the equivalent programmatic configuration:
+
+[source,java]
+----
+ClientConfig config = ...;
+config.setProperty("hazelcast.client.heartbeat.timeout", "60000");
+config.setProperty("hazelcast.client.heartbeat.interval", "5000");
+[...]
+----
+
+=== Client ping failure detector
+
+In addition to the Deadline Failure Detector, the Ping Failure Detector may be configured on your client.
+Note that this detector is disabled by default. The Ping Failure Detector
+operates at Layer 3 of the OSI protocol and provides much quicker and more deterministic
+detection of hardware and other lower-level events.
+When the JVM process has enough permissions to create RAW sockets, the implementation
+chooses to rely on ICMP Echo requests. This is the preferred mode.
+
+If there are not enough permissions, it can be configured to fall back to attempting
+a TCP Echo on port 7. In the latter case, either a successful connection or an explicit rejection
+is treated as "Host is Reachable". Alternatively, it can be forced to use only RAW sockets.
+This is not preferred, as each call creates a heavyweight socket and, moreover, the Echo service is typically disabled.
+
+For the Ping Failure Detector to rely **only** on ICMP Echo requests,
+the following criteria need to be met:
+
+* Supported OS: as of Java 1.8, only Linux/Unix environments are supported.
+* The Java executable must have the `cap_net_raw` capability.
+* The file `ld.conf` must be edited to overcome the rejection by the dynamic
+linker when loading libraries from untrusted paths.
+* ICMP Echo Requests must not be blocked by the receiving hosts.
+
+The details of these requirements are explained in the
+xref:clusters:failure-detector-configuration.adoc#requirements-and-linuxunix-configuration[Requirements section] of
+the Hazelcast members' xref:clusters:failure-detector-configuration.adoc#ping-failure-detector[Ping Failure Detector].
+
+If any of the above criteria aren't met, `isReachable` always
+falls back to TCP Echo attempts on port 7.
+
+An example declarative configuration to use the Ping Failure Detector is
+as follows (in the client's configuration XML file, e.g., `hazelcast-client.xml`):
+
+[tabs]
+====
+XML::
++
+--
+[source,xml]
+----
+<hazelcast-client>
+    ...
+    <network>
+        <icmp-ping enabled="true">
+            <timeout-milliseconds>1000</timeout-milliseconds>
+            <interval-milliseconds>1000</interval-milliseconds>
+            <ttl>255</ttl>
+            <echo-fail-fast-on-startup>false</echo-fail-fast-on-startup>
+            <max-attempts>2</max-attempts>
+        </icmp-ping>
+    </network>
+    ...
+</hazelcast-client>
+----
+--
+
+YAML::
++
+[source,yaml]
+----
+hazelcast-client:
+  network:
+    icmp-ping:
+      enabled: true
+      timeout-milliseconds: 1000
+      interval-milliseconds: 1000
+      ttl: 255
+      echo-fail-fast-on-startup: false
+      max-attempts: 2
+----
+====
+
+The following is the equivalent programmatic configuration:
+
+[source,java]
+----
+ClientConfig clientConfig = ...;
+
+ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig();
+ClientIcmpPingConfig clientIcmpPingConfig = networkConfig.getClientIcmpPingConfig();
+clientIcmpPingConfig.setIntervalMilliseconds(1000)
+        .setTimeoutMilliseconds(1000)
+        .setTtl(255)
+        .setMaxAttempts(2)
+        .setEchoFailFastOnStartup(false)
+        .setEnabled(true);
+----
+
+The following are the descriptions of the configuration elements and attributes:
+
+* `enabled`: Enables the legacy ICMP detection mode. It works cooperatively with
+the existing failure detector and only kicks in after a pre-defined period
+has passed with no heartbeats from a member. Its default value is `false`.
+* `timeout-milliseconds`: Number of milliseconds until a ping attempt is
+considered failed if there was no reply. Its default value is *1000* milliseconds.
+* `max-attempts`: Maximum number of ping attempts before the member gets
+suspected by the detector. Its default value is *3*.
+* `interval-milliseconds`: Interval, in milliseconds, between each ping attempt.
+1000 ms (1 second) is also the minimum interval allowed. Its default value is *1000* milliseconds.
+* `ttl`: Maximum number of hops the packets should go through.
+Its default value is *255*. You can set it to *0* to use your system's default TTL.
+
+In the above example configuration, the Ping Failure Detector attempts 2 pings,
+one every second, and waits up to 1 second for each to complete.
+If there is no successful ping after 2 seconds, the member gets suspected.
+
+To enforce the xref:clusters:failure-detector-configuration.adoc#requirements-and-linuxunix-configuration[Requirements],
+the property `echo-fail-fast-on-startup` can also be set to `true`, in which case Hazelcast fails to start if any of the requirements
+isn't met.
+
+Unlike on Hazelcast members, the Ping Failure Detector always works in parallel with
+the Deadline Failure Detector on clients.
+Below is a summary table of all possible configuration combinations of the Ping Failure Detector.
+
+|===
+| ICMP| Fail-Fast| Description| Linux| Windows | macOS
+
+| true
+| false
+| Parallel ping detector, works in parallel with the configured failure detector.
+Checks periodically if members are live (OSI Layer 3) and suspects them immediately,
+regardless of the other detectors.
+| Supported ICMP Echo if available - Falls back on TCP Echo on port 7
+| Supported TCP Echo on port 7
+| Supported ICMP Echo if available - Falls back on TCP Echo on port 7
+
+| true
+| true
+| Parallel ping detector, works in parallel with the configured failure detector.
+Checks periodically if members are live (OSI Layer 3) and suspects them immediately,
+regardless of the other detectors.
+| Supported - Requires OS Configuration Enforcing ICMP Echo if available - No start up if not available
+| Not Supported
+| Not Supported - Requires root privileges
+|===
+
+=== Java client failure detectors
+
+The client failure detectors are responsible for determining whether a member in the cluster is unreachable or has crashed.
+The hardest problem in failure detection is to distinguish
+a member that is still alive but slow from one that has crashed.
+According to the well-known http://dl.acm.org/citation.cfm?doid=3149.214121[FLP result^],
+it is impossible to distinguish a crashed member from a slow one in an asynchronous system.
+A workaround for this limitation is to use unreliable failure detectors:
+an unreliable failure detector allows a member to suspect that others have failed,
+usually based on liveness criteria, although it can make mistakes to a certain degree.
+
+The Hazelcast Java client has the two built-in failure detectors described above: the Deadline Failure Detector and
+the Ping Failure Detector. These client failure detectors work independently of
+the member failure detectors; for example, you do not need to enable the member failure detectors
+to benefit from the client ones.
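+
+As a rough sketch only (the timeout, interval, and ping values are illustrative, not recommendations), the two detectors can be combined programmatically like this:
+
+[source,java]
+----
+ClientConfig clientConfig = new ClientConfig();
+
+// Deadline Failure Detector: the heartbeat interval must stay below the heartbeat timeout.
+clientConfig.setProperty("hazelcast.client.heartbeat.interval", "5000");
+clientConfig.setProperty("hazelcast.client.heartbeat.timeout", "60000");
+
+// Ping Failure Detector: disabled by default, so enable it explicitly; it then runs
+// in parallel with the Deadline Failure Detector.
+clientConfig.getNetworkConfig().getClientIcmpPingConfig()
+        .setEnabled(true)
+        .setIntervalMilliseconds(1000)
+        .setTimeoutMilliseconds(1000)
+        .setMaxAttempts(2);
+
+HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
+----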