Update distributed_deployment.md #1281

@@ -1159,6 +1159,15 @@ client.CreateCollection(context.Background(), &qdrant.CreateCollection{

Write operations will fail if the number of active replicas is less than the `write_consistency_factor`.

The `write_consistency_factor` setting is important for adjusting the cluster's behavior when some nodes go offline due to restarts, upgrades, or failures.

By default, the cluster continues to accept updates as long as at least one replica of each shard is online. However, this behavior means that once an offline replica is restored, it will require additional synchronization with the rest of the cluster. In some cases, this synchronization can be resource-intensive and undesirable.

Setting the `write_consistency_factor` to match the `replication_factor` modifies the cluster's behavior so that unreplicated updates are rejected, preventing the need for extra synchronization.
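
For illustration, a minimal sketch of creating a collection with `write_consistency_factor` equal to `replication_factor` using the Go client. The collection name, vector parameters, and shard/replica counts are placeholder assumptions, and helpers such as `qdrant.PtrOf` and `qdrant.NewVectorsConfig` follow recent Go client versions; verify against the client version in use.

```go
import (
	"context"

	"github.com/qdrant/go-client/qdrant"
)

// createStrictCollection creates a collection where write_consistency_factor
// equals replication_factor, so updates that cannot reach every replica are rejected.
func createStrictCollection(client *qdrant.Client) error {
	return client.CreateCollection(context.Background(), &qdrant.CreateCollection{
		CollectionName: "{collection_name}",
		VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
			Size:     300, // placeholder vector size
			Distance: qdrant.Distance_Cosine,
		}),
		ShardNumber:            qdrant.PtrOf(uint32(6)), // placeholder shard count
		ReplicationFactor:      qdrant.PtrOf(uint32(2)),
		WriteConsistencyFactor: qdrant.PtrOf(uint32(2)), // matches replication_factor
	})
}
```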


Maybe it would be nice to be a little bit more concrete on what happens:

Suggested change:

> If the update is applied to enough replicas - according to the `write_consistency_factor` - the update will return a successful status. Any replicas that failed to apply the update will be temporarily disabled and are automatically recovered to keep data consistency. If the update could not be applied to enough replicas, it'll return an error and may be partially applied. The user must submit the operation again to ensure data consistency.

Here I describe that the update will return a successful status if it was applied to enough replicas. That is not necessarily true if there are consensus problems, but I don't think we have to describe that edge case here.

For asynchronous updates and ingestion pipelines capable of handling errors and retries, this strategy might be preferable.
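
As a rough sketch of the kind of pipeline this favors, the snippet below resubmits a rejected update with exponential backoff. `upsertBatch` is a hypothetical stand-in for whatever write operation the pipeline performs (for example an upsert against the client); only the retry logic is the point.

```go
import (
	"context"
	"fmt"
	"time"
)

// upsertBatch is a hypothetical stand-in for the pipeline's write operation.
// It returns an error when the update is rejected, e.g. because not enough
// replicas were available to satisfy write_consistency_factor.
func upsertBatch(ctx context.Context, batch []float32) error {
	// ... perform the write against the cluster
	return nil
}

// writeWithRetry resubmits a rejected update a few times before giving up,
// which is what an ingestion pipeline needs when write_consistency_factor
// equals replication_factor and a replica is temporarily offline.
func writeWithRetry(ctx context.Context, batch []float32) error {
	backoff := time.Second
	const maxAttempts = 5
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := upsertBatch(ctx, batch); err == nil {
			return nil
		} else if attempt == maxAttempts {
			return fmt.Errorf("update still rejected after %d attempts: %w", attempt, err)
		}
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff before resubmitting
	}
	return nil
}
```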


### Read consistency

Read `consistency` can be specified for most read requests and will ensure that the returned result