Add failover

Running a single node means that you have a single point of failure — there is no redundancy. Fortunately, all we need to do to protect ourselves from data loss is to start another node.

Starting a second node

To test out what happens when you add a second node, you can start a new node in exactly the same way as you started the first one (see [running-elasticsearch]), and from the same directory — multiple nodes can share the same directory.
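
For example, with the tar.gz distribution you might simply run the startup script again from a second terminal. A minimal sketch — the install path is a placeholder:

cd /path/to/elasticsearch    # same directory as the first node
./bin/elasticsearch          # starts a second node as a separate process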

As long as the second node has the same cluster.name as the first node (see the ./config/elasticsearch.yml file), it should automatically discover and join the cluster run by the first node. If it doesn’t, check the logs to find out what went wrong. It may be that multicast is disabled on your network, or that a firewall is preventing your nodes from communicating.
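
For reference, the relevant line in ./config/elasticsearch.yml might look like the following sketch; the cluster name shown is just the default that appears in the health response later in this chapter:

cluster.name: elasticsearch    # must be identical on every node that should join the cluster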

If we start a second node, our cluster would look like the one shown in Figure 1, with all primary and replica shards allocated.

Figure 1. A two-node cluster — all primary and replica shards are allocated

The second node has joined the cluster and three replica shards have been allocated to it — one for each primary shard. That means that we can lose either node and all of our data will be intact.

Any newly indexed document will first be stored on a primary shard, then copied in parallel to the associated replica shard(s). This ensures that our document can be retrieved from a primary shard or from any of its replicas.
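
To see this in action, you could index a document and read it back with curl; a quick sketch, assuming a node listening on localhost:9200 and a hypothetical blogs index:

# The write goes to a primary shard first, then is copied to its replicas
curl -XPUT 'http://localhost:9200/blogs/post/1' -d '{ "title": "My first blog post" }'

# The read can be served by the primary shard or by any of its replicas
curl -XGET 'http://localhost:9200/blogs/post/1?pretty'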

The cluster-health API now reports a status of green, which means that all 6 shards (all 3 primary shards and all 3 replica shards) are active.
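
As a quick check, you can make the request yourself with curl, assuming a node listening on the default port 9200:

curl -XGET 'http://localhost:9200/_cluster/health?pretty'

which returns: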

{
   "cluster_name":          "elasticsearch",
   "status":                "green", (1)
   "timed_out":             false,
   "number_of_nodes":       2,
   "number_of_data_nodes":  2,
   "active_primary_shards": 3,
   "active_shards":         6,
   "relocating_shards":     0,
   "initializing_shards":   0,
   "unassigned_shards":     0
}
  1. Cluster status is green.

Our cluster is not only fully functional but also always available.