Do not unban replicas if a primary is available #843

Open · wants to merge 1 commit into main
Conversation

magec (Collaborator) commented Oct 30, 2024

Our current infrastructure consists of several shards, each composed of 1 primary and 1 replica (some shards have more than one replica, but that is irrelevant here). Our PgCat configuration allows access to each of them (we do not rely on PgCat for load balancing, etc.), i.e. we declare 1 pool per server. Something like:

# Shard 0

[pools.db_x0_rw]
...
[pools.db_x0_rw.shards.0]
servers = [
    [ "db_x0_primary.example.com.", 5432, "primary" ],
]

[pools.db_x0_ro]
...
[pools.db_x0_ro.shards.0]
servers = [
    [ "db_x0_replica.example.com.", 5432, "primary" ],
]
...

This is because we cannot use automatic query load balancing due to latency on the replica. Given that our applications already had the notion of 'read-only replicas', this scheme has been working for us for quite some time.

Now we are looking to leverage PgCat to do automatic failover when a replica fails. The approach here would be a configuration like:

# Shard 0

[pools.db_x0_rw]
...
[pools.db_x0_rw.shards.0]
servers = [
    [ "db_x0_primary.example.com.", 5432, "primary" ],
]

[pools.db_x0_ro]
default_role = "replica"
...
[pools.db_x0_ro.shards.0]
servers = [
    [ "db_x0_primary.example.com.", 5432, "primary" ],
    [ "db_x0_replica.example.com.", 5432, "replica" ],
]
...

This way, given that db_x0_ro has a default_role of replica, it will always use the replica (which is intended), and the primary will only be used when the replica is banned.

This is what I would expect as the normal behavior given this configuration, but the code defends itself against 'all replicas being banned' and automatically unbans the replica.

I understand that this is a fail-safe mechanism to protect PgCat against false positives on replicas being down. My point is that, if we have a primary configured for this pool, we can afford to have all replicas banned for a while; once the ban_timeout is reached they will be rechecked and added back to the pool.

This change loosens this restriction so that the automatic unban only happens when the pool has no primaries at all.
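
For context on the ban timeout mentioned above: in PgCat this is controlled by the ban_time setting in the [general] section of the config. A minimal sketch; the value shown is only illustrative, not a recommendation:

[general]
# Seconds a banned server stays out of rotation before it is
# rechecked and, if healthy, added back to the pool.
ban_time = 60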

drdrsh (Collaborator) commented Oct 30, 2024

@magec We had incidents in the past where this failover-to-primary behavior made matters worse, so I am wondering if we should put this behind a config option or only enable that behavior if the primary_reads_enabled flag is enabled.

drdrsh (Collaborator) commented Oct 30, 2024

Reading through your example, you probably want primary_reads_enabled to be set to false to avoid sending reads to the primary under normal circumstances, so it may not be the right flag to gate that feature.

magec (Collaborator, Author) commented Oct 30, 2024

We have query_parser_enabled set to false, so the primary_reads setting won't have any effect. We don't use the query parser because we don't want PgCat to decide where to send the query; our services already take that into account.

The behavior we want was achieved by setting default_role to replica. When that is set, queries are not sent to the primary at all. The problem is that when the replica goes down it gets banned, but then it gets unbanned right away because of the "all replicas are banned, let's unban them to prevent false positives" behavior.

With the change, everything works as expected: the primary acts as a failover for the replica, and the replica gets retried every ban_time seconds.

What issues did you encounter with primaries? I don't fully understand. Another approach, as you mention, is making "unban all replicas when all are banned" configurable and true by default (the current behavior); that would also do the trick.

That said, I think this use case should be supported; we want to protect ourselves from host errors on replicas.

magec (Collaborator, Author) commented Oct 31, 2024

@drdrsh I changed the PR so that the behavior is now gated behind a configuration option. I would still like to add an integration test for this, but I just wanted to know whether we were aligned.
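
For illustration only, a sketch of what the gated configuration might look like for the read-only pool. The option name below is a placeholder (the final name was still under discussion), and its placement in the pool section is an assumption:

[pools.db_x0_ro]
default_role = "replica"
# Placeholder option name: when enabled, banned replicas are not
# auto-unbanned as long as a primary is available in the pool to
# absorb the traffic; the replica is retried after ban_time elapses.
failover_to_primary_when_all_replicas_banned = true
...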

drdrsh (Collaborator) commented Oct 31, 2024

That looks good to me.
I am not sure about the name of the config option, but the general principle LGTM.
