
Client Node-Discovery and Auto-configuration (aka "autoconfig") #356

@seancribbs

Several other clustered datastores (including Couchbase/Membase,
Amazon DynamoDB, MongoDB with the "config server") have ways of
discovering new nodes in the cluster. Since any Riak node can handle
any request, we should be able to advertise the existence of other
nodes to clients so that they can reconfigure automatically, spreading
load more evenly among the Riak nodes. This can reduce operational
burden, because applications will not need to be restarted to take
advantage of a grown cluster. This will also pave the way for the
future work of preflist-hinting or client-side routing.

Client-Server Interaction

The goal of this proposal is to provide as much backward- and forward-
compatibility as possible. As such, existing request/response cycles
are overloaded with new information: the "/" resource on the HTTP
interface and the RpbGetServerInfoResp message will be modified for
this purpose.

In addition to the one-roundtrip cycle, clients should be able to
stream updates by providing an additional request parameter or
limiting the acceptable media-types. This allows them to receive
cluster changes as they occur and update accordingly, perhaps using a
background thread or reactor loop. It also removes the need to send
reconfiguration messages in-band with other requests.
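
As an illustration of that client-side bookkeeping, the sketch below
(Python; the NodePool and watch_cluster names are invented for this
example and exist in no current client) keeps the known node list
behind a lock and swaps it out whenever a background watcher receives
an update:

import itertools
import threading

class NodePool:
    """Holds the known Riak nodes; request code picks from it round-robin."""
    def __init__(self, seed_nodes):
        self._lock = threading.Lock()
        self._nodes = list(seed_nodes)            # e.g. [("10.0.1.150", 8087)]
        self._cycle = itertools.cycle(self._nodes)

    def update(self, nodes):
        # Called from the background watcher when the server advertises
        # a changed cluster membership list.
        with self._lock:
            self._nodes = list(nodes)
            self._cycle = itertools.cycle(self._nodes)

    def next_node(self):
        # Request-issuing code spreads load by cycling through the nodes.
        with self._lock:
            return next(self._cycle)

def watch_cluster(pool, updates):
    # `updates` yields node lists as the server pushes them (see the HTTP
    # and PB streaming examples later in this proposal).
    for nodes in updates:
        pool.update(nodes)

# threading.Thread(target=watch_cluster, args=(pool, updates), daemon=True).start()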

Implementation Details

PB

The RpbGetServerInfoResp would be modified to add information about
other nodes in the cluster, and RpbGetServerInfoReq would be made a
full message:

// Added message
message RpbGetServerInfoReq {
    optional bool stream = 1 [default=false];
}

message RpbGetServerInfoResp {
    optional bytes node = 1;
    optional bytes server_version = 2;
    // Added field
    repeated RpbNode nodes = 3;
}

message RpbNode {
    // The node name, same as what would be in the original RpbGetServerInfoResp.
    required bytes name = 1;
    repeated RpbClientInterface interfaces = 2;
}

message RpbClientInterface {
    enum RpbProtocol {
        PB = 1;
        HTTP = 2;
        HTTPS = 3;
    }
    required RpbProtocol protocol = 1;
    // This can be a FQDN or a text representation of an IP address.
    required bytes host = 2;
    required uint32 port = 3;    
}
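
For illustration only, a client might drive these messages as follows;
riak_autoconfig_pb2 stands in for whatever Python module protoc would
generate from the definitions above (the module name is hypothetical):

from riak_autoconfig_pb2 import (
    RpbClientInterface,
    RpbGetServerInfoReq,
    RpbGetServerInfoResp,
)

def request_bytes(stream=False):
    # RpbGetServerInfoReq becomes a full message; stream=True asks the
    # server to keep pushing cluster changes.
    req = RpbGetServerInfoReq()
    req.stream = stream
    return req.SerializeToString()

def decode_nodes(payload):
    # Flatten the repeated `nodes` field into (name, protocol, host, port)
    # tuples that a connection pool can consume.
    resp = RpbGetServerInfoResp()
    resp.ParseFromString(payload)
    protocols = {
        RpbClientInterface.PB: "pb",
        RpbClientInterface.HTTP: "http",
        RpbClientInterface.HTTPS: "https",
    }
    return [
        (node.name, protocols[iface.protocol], iface.host, iface.port)
        for node in resp.nodes
        for iface in node.interfaces
    ]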

HTTP

The "root" resource riak_core_wm_urlmap would be modified to return
additional information:

  1. A link (in the HTML format) with relation "self" that identifies
    the URL of the current node and its name:

    <a href="http://10.0.1.150:8098/" rel="self">riak@10.0.1.150</a>

    This could also be represented in a similar fashion in the Link
    response header. If the node also has PB configured, it should
    include similar links using the "alternate" relation.

  2. Links for other known nodes in the cluster, listed in the
    same fashion as the current node but using the "alternate"
    relation.

  3. If given the query string parameter stream=true, updates to the
    configuration will be streamed back to the client (see the sketch
    after this list).
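
As a rough idea of the streaming variant from the client side, the
sketch below uses the Python requests library to watch "/" with the
proposed stream=true parameter; the one-JSON-document-per-line framing
is an assumption made for the example, not something this proposal
specifies:

import json
import requests

def watch_urlmap(base_url="http://10.0.1.150:8098/"):
    resp = requests.get(base_url,
                        params={"stream": "true"},
                        headers={"Accept": "application/json"},
                        stream=True)
    for line in resp.iter_lines():
        if not line:
            continue            # skip keep-alive blank lines
        yield json.loads(line)  # each update is a full urlmap document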

The JSON format of riak_core_wm_urlmap would change like so:

{
  "riak_kv_wm_buckets": "/riak",
  "riak_kv_wm_counter": "/buckets",
  "riak_kv_wm_index": "/buckets",
  "riak_kv_wm_keylist": "/buckets",
  "riak_kv_wm_link_walker": "/riak",
  "riak_kv_wm_mapred": "/mapred",
  "riak_kv_wm_object": "/riak",
  "riak_kv_wm_ping": "/ping",
  "riak_kv_wm_props": "/buckets",
  "riak_kv_wm_stats": "/stats",
  "riak_solr_indexer_wm": "/solr",
  "riak_solr_searcher_wm": "/solr",
  // Added below here
  "riak@10.0.1.150": {
      "interfaces": [
          "http://10.0.1.150:8098/",
          "pbc://10.0.1.150:8087/"
      ]
  },
  "riak@10.0.1.151": {
      "interfaces": [
          "http://10.0.1.151:8098/",
          "pbc://10.0.1.151:8087/"
      ]
  }
  // etc.
}
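
A client could pull the node entries out of this document along the
lines below; treating any value that carries an "interfaces" list as a
node entry is an assumption about how the two kinds of keys would be
told apart:

def nodes_from_urlmap(urlmap):
    # `urlmap` is the parsed JSON document above; resource routes map to
    # plain strings, while node entries map to objects with "interfaces".
    return {
        name: value["interfaces"]
        for name, value in urlmap.items()
        if isinstance(value, dict) and "interfaces" in value
    }

For the example above this would yield {"riak@10.0.1.150":
["http://10.0.1.150:8098/", "pbc://10.0.1.150:8087/"], ...}.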

Server Implementation

A new dedicated process on each node, probably called
riak_api_autoconfig, will be added to broadcast, cache, and serve
listener information among the cluster members. When receiving an
update from another node, the process will update its internal cache
and notify client-socket processes if the state has changed. Those
client-socket processes will filter the advertised interfaces down to
the addresses that are reachable from the connected peer (matched by
CIDR).
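
The filtering step might look roughly like the following (Python with
the standard ipaddress module, purely to illustrate the CIDR matching;
the real code would be Erlang inside riak_api_autoconfig):

import ipaddress

def reachable_interfaces(peer_addr, interfaces):
    # `interfaces` is a list of (address, prefix_len, port) tuples
    # describing the listeners a node has advertised. Keep only those
    # whose network also contains the connected peer.
    peer = ipaddress.ip_address(peer_addr)
    kept = []
    for addr, prefix_len, port in interfaces:
        network = ipaddress.ip_network("%s/%d" % (addr, prefix_len),
                                       strict=False)
        if peer in network:
            kept.append((addr, port))
    return kept

# reachable_interfaces("10.0.1.42",
#                      [("10.0.1.150", 24, 8087), ("192.168.5.1", 24, 8087)])
# -> [("10.0.1.150", 8087)]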

Existing code from riak_repl and riak_core will be repurposed or
refactored to support detecting and filtering interfaces.
Additionally, base HTTP support will likely be moved into riak_api
so that client-interface configuration can live in a single place.

TBD: Autoconfig information might also be affected by authorization, see #355.

Risks/Problems

  1. A user/sysadmin may configure the client to connect to Riak through
    a proxy or load balancer (e.g. HAProxy), making this feature
    unnecessary in that deployment. The client should have an option to
    turn off or ignore autoconfig, and it should be possible to disable
    it on the server side as well.
  2. Hostname/IP mappings should be resolved once at startup (see the
    sketch after this list); otherwise, if someone has bound an
    interface to a hostname rather than an IP address, repeated
    inet_gethost calls could needlessly dominate. This gets even
    stickier if administrators use round-robin DNS or a VIP, in which
    case autoconfig should probably be disabled.
  3. We should be careful about sending autoconfig information over the
    wire at times other than node startup or clean shutdown. Network
    partitions between nodes could cause flappy, undesirable behavior
    in clients.
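
A minimal sketch of the startup-time resolution suggested in item 2,
resolving each advertised hostname once and caching the result rather
than hitting the resolver (inet_gethost on the Erlang side) on every
broadcast; the function and cache layout are invented for the example:

import socket

def resolve_listeners(listeners):
    # Map (host, port) pairs to (ip, port), resolving each hostname once.
    cache = {}
    resolved = []
    for host, port in listeners:
        if host not in cache:
            infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
            cache[host] = infos[0][4][0]   # first returned address
        resolved.append((cache[host], port))
    return resolved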
