Merged
2 changes: 2 additions & 0 deletions docs/modules/clients/pages/client-overview.adoc
@@ -90,3 +90,5 @@ For help getting started with the Hazelcast clients, see the client tutorials in
For details about using Memcache to communicate directly with a Hazelcast cluster, see xref:memcache.adoc[Memcache].

For information about using the REST API for simple operations, see: xref:rest.adoc[REST].

For networking information, see xref:clusters:network-configuration.adoc[].
11 changes: 8 additions & 3 deletions docs/modules/clients/pages/java.adoc
@@ -1917,9 +1917,14 @@ See xref:extending-hazelcast:discovery-spi.adoc[Discovery SPI] for more information.
|Enables the discovery joiner to use public IPs from `DiscoveredNode`.
See xref:extending-hazelcast:discovery-spi.adoc[Discovery SPI] for more information.
When set to `true`, the client assumes that it needs to use public IP addresses reported by the members.
When set to `false`, the client always uses private addresses reported by the members.
If not set, the client attempts to infer which address type to use by testing member reachability.
The inference checks only a small sample of members (3 by default) with fixed timeouts, so it is
not 100% reliable and may produce incorrect results in environments with variable network latency
or partial connectivity. Additionally, inference is skipped when SSL/TLS is enabled; in that case, the
client defaults to private addresses.
When members are configured with xref:clusters:network-configuration.adoc#public-address[public addresses]
and you want clients to use them, we recommend explicitly setting this property to `true` rather than relying on inference.

|`hazelcast.client.event.queue.capacity`
|1000000
70 changes: 62 additions & 8 deletions docs/modules/clusters/pages/network-configuration.adoc
@@ -9,14 +9,26 @@ available configurations that you can perform under the `network` element.
[[public-address]]
== Public Address

`public-address` overrides the address that a member advertises to other members
and clients. By default, a member advertises its socket address.

`public-address` is a Hazelcast configuration value (`host:port`), not a specific
infrastructure object such as a "public IP address". It can be any reachable
address or hostname that other members and clients should use to connect.

Common use cases include:

* **NAT environments**: When members are behind NAT, set the public address to the
externally visible `host:port` so that other members and clients can reach them.
* **Private cloud deployments**: When the member's socket address is not directly
accessible from outside the private network.

The value can be specified as either:

* A public IP address with port: `11.22.33.44:5555`
* A private IP address with port: `10.10.1.25:5701`
* A hostname with port: `member1.example.com:5555`

See the following examples.

[tabs]
@@ -55,6 +67,48 @@ config.getNetworkConfig()
----
====

=== How Public Address Affects Hazelcast Features

The `public-address` setting controls which endpoint a member advertises. Members
still bind and listen on their local bind address, but peers and clients connect
using advertised addresses. Understanding this distinction is important when
configuring your cluster:

[cols="1,3"]
|===
|Feature |Behavior with Public Address

|**Member-to-member communication**
|Members listen on their bind address, but discover and connect to each other
using advertised addresses. If `public-address` is set, other members use that
value.

|**Client connections**
|Clients can use the public address to connect to members. This requires either
explicitly enabling `hazelcast.discovery.public.ip.enabled` on the client, or
relying on automatic inference (see <<client-public-address-discovery>>).

|**WAN Replication**
|Uses defined member addresses. Configure target cluster endpoints explicitly
in WAN replication configuration.
|===
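
For example, a member behind NAT can bind to its local interface while advertising the externally
reachable address. A minimal declarative sketch (the addresses below are illustrative):

[source,xml]
----
<hazelcast>
    <network>
        <!-- The port the member binds to on its local interface -->
        <port>5701</port>
        <!-- The address the member advertises to other members and clients -->
        <public-address>203.0.113.10:5701</public-address>
    </network>
</hazelcast>
----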

[[client-public-address-discovery]]
=== Client Public Address Discovery

For clients to use members' public addresses, configure the
`hazelcast.discovery.public.ip.enabled` property on the client:

* `true`: The client always uses public addresses reported by members. If the member does not declare
a public address, the client falls back to private IPs as reported by the member list.
* `false`: The client always uses private (internal) addresses reported by members.
* Not set (default): The client attempts to infer which address type to use based on
reachability. See xref:clients:java.adoc#client-system-properties[Client System Properties]
for details on inference behavior and its limitations.

NOTE: We recommend explicitly setting `hazelcast.discovery.public.ip.enabled` to `true`
on clients when members are configured with public addresses, rather than relying on
automatic inference.
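
For example, the property can be set in the client's declarative configuration (a minimal sketch):

[source,xml]
----
<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config">
    <properties>
        <property name="hazelcast.discovery.public.ip.enabled">true</property>
    </properties>
</hazelcast-client>
----

It can also be set programmatically with `ClientConfig.setProperty("hazelcast.discovery.public.ip.enabled", "true")`.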

[[port]]
== Port
2 changes: 1 addition & 1 deletion docs/modules/cp-subsystem/pages/best-practices.adoc
@@ -13,4 +13,4 @@

* Distribute CP group members across three data centers to balance resilience and performance. For example, a group of seven members with a 3/3/1 split.

* Minimize latency across your CP Subsystem deployment. Throughput is limited by the latency between the CP group leader and the slowest follower used for quorum calculations.