Recently, while working a Support Ticket, I discovered to my great surprise that it is possible for an Elasticsearch cluster to grow too big. When a cluster grows too big it effectively stops functioning, because every single operation returns a circuit breaker failure. Customers whose production clusters have been functioning perfectly well can suddenly find themselves dead in the water with little or no warning.
Not only did this come as a surprise to me, it came as a surprise to the customer as well. I think both of us were reasoning in RDBMS terms, whereby a data-heavy cluster might have so much data on disk that every operation would trigger hundreds of calls to the disk subsystem and would be very slow ... but would still function. This is not true of an Elasticsearch node or cluster: once the circuit breakers start tripping, operations fail outright rather than merely slowing down.
It turns out that the fact that a cluster can grow too big is not clearly documented anywhere I can find, and I certainly don't recall any discussion of it in any of the Training classes I have taken or taught.
In my opinion this is a serious omission, and an increasingly serious one as customers operate ever-larger clusters that are susceptible to these operating-limit failures.
I'm attaching the first draft of a Support Note I've written on the problem. I suggest the documentation should cover at least the following points:
- Every cluster has a maximum operating limit in terms of the amount of indexed data it can support. When this limit is reached, all operations fail with circuit breaker exceptions and the cluster is effectively down.
- The exact limit is difficult to determine: it depends on the size of the node JVM heaps, the amount of data being indexed, the nature of the index keys, the amount of field data associated with each index, and other factors. The documentation should emphasize that the maximum operating limit is measured in terabytes of indexed data per node.
- What can be done proactively to anticipate reaching a cluster's maximum operating limit, specifically by checking various entries in the node statistics (see the sketch after this list).
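To make the last point concrete, here is a minimal sketch of what such a proactive check could look like, written in Python against the `_nodes/stats` API. The `localhost:9200` endpoint and the 75% heap threshold are assumptions for illustration, not official guidance; the exact thresholds to alert on would be part of what the documentation needs to spell out.

```python
#!/usr/bin/env python3
"""Minimal sketch: poll node statistics for early signs that a cluster is
approaching its operating limit. Assumes an unsecured cluster reachable at
ES_URL; the warning threshold below is illustrative only."""

import requests

ES_URL = "http://localhost:9200"   # assumption: local, unsecured cluster
HEAP_WARN_PCT = 75                 # illustrative threshold, not official

resp = requests.get(f"{ES_URL}/_nodes/stats/jvm,breaker", timeout=10)
resp.raise_for_status()

for node_id, node in resp.json()["nodes"].items():
    name = node["name"]

    # Sustained high heap usage is the leading indicator that a node is
    # nearing the point where the parent circuit breaker starts rejecting
    # every operation.
    heap_pct = node["jvm"]["mem"]["heap_used_percent"]
    if heap_pct >= HEAP_WARN_PCT:
        print(f"{name}: heap at {heap_pct}% -- approaching the limit")

    # Each breaker reports how often it has tripped; a non-zero, growing
    # 'tripped' count means operations are already being rejected.
    for breaker, stats in node["breakers"].items():
        if stats["tripped"] > 0:
            print(f"{name}: breaker '{breaker}' tripped "
                  f"{stats['tripped']} time(s), estimated "
                  f"{stats['estimated_size_in_bytes']} of "
                  f"{stats['limit_size_in_bytes']} bytes")
```

Run periodically (e.g. from cron), this gives the early warning that the customer in this ticket never had: heap creeping toward the breaker limits, and breaker trip counts starting to climb, well before the cluster goes dark.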