maxwellHA on zookeeper #1948

Open · wants to merge 3 commits into base: master
Changes from 1 commit
37 changes: 33 additions & 4 deletions src/main/java/com/zendesk/maxwell/MaxwellHA.java
@@ -5,12 +5,17 @@
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.framework.recipes.leader.LeaderLatchListener;
import org.jgroups.JChannel;
import org.jgroups.protocols.raft.Log;
import org.jgroups.protocols.raft.Role;
import org.jgroups.raft.RaftHandle;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.net.InetAddress;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/**
* Class that joins a jgroups-raft cluster of servers or zookeeper
@@ -115,6 +120,10 @@ public void startHAJGroups() throws Exception {
* @throws Exception
*/
public void startHAZookeeper() throws Exception {

Lock lock = new ReentrantLock();
String hostAddress = InetAddress.getLocalHost().getHostAddress();

String electPath = "/" + clientID + "/services";
String masterPath = "/" + clientID + "/leader";
CuratorUtils cu = new CuratorUtils();
@@ -128,28 +137,48 @@ public void startHAZookeeper() throws Exception {
cu.setMasterPath(masterPath);
cu.init();
CuratorFramework client = cu.getClient();
- LeaderLatch leader = new LeaderLatch(client, cu.getElectPath());
+ LeaderLatch leader = new LeaderLatch(client, cu.getElectPath(), hostAddress, LeaderLatch.CloseMode.NOTIFY_LEADER);
leader.start();
- LOGGER.info("this node is participating in the election of the leader ....");
+ LOGGER.info("this node:" + hostAddress + " is participating in the election of the leader ....");
leader.addListener(new LeaderLatchListener() {
@Override
public void isLeader() {
try {
lock.lock();
cu.register();
} catch (Exception e) {
e.printStackTrace();
LOGGER.error("The node registration is abnormal, check whether the maxwell host communicates properly with the zookeeper network");
cu.stop();
System.exit(1);
}finally {
lock.unlock();
}
- LOGGER.info("node is current leader, starting Maxwell....");
+ LOGGER.info("node:" + hostAddress + " is current leader, starting Maxwell....");
LOGGER.info("hasLeadership = " + leader.hasLeadership());

run();

try {
leader.close();
} catch (IOException e) {
e.printStackTrace();
}
cu.stop();
}

@Override
public void notLeader() {
//LeaderLatch.CloseMode.SILENT mode will not invoke this method
try {
lock.lock();
LOGGER.warn("node:" + hostAddress + " lost leader");
LOGGER.warn("master-slave switchover......");
LOGGER.warn("The leadership went from " + hostAddress + " to " + leader.getLeader());
Reviewer:
Should this case shut down the current maxwell process given that it has lost the leadership status?

Contributor Author:
I don't understand what you mean.

Reviewer:

Sorry, I mean that by the time we get this call, the current Maxwell process has already lost leadership. That means we should probably stop the process altogether, or shut down the replicator and go back into election mode to wait for our turn once again.

Logging a warn does nothing and means we are going to keep replicator threads running and pumping duplicate data into whatever producer is configured. Additionally, the positions store will start getting conflicting writes from two different processes.

@osheroff Do I understand it correctly?
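
The conflicting-writes concern above can be illustrated with a toy sketch. The `PositionStore` class and its key are hypothetical stand-ins for Maxwell's positions table, not its actual schema: a stale leader that keeps running can silently overwrite the new leader's position with an older one.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a shared positions row, keyed by client_id.
// Illustrative only; not Maxwell's real storage code.
class PositionStore {
    private final Map<String, String> positions = new HashMap<>();

    void put(String clientId, String binlogPosition) {
        positions.put(clientId, binlogPosition); // last writer wins, no fencing
    }

    String get(String clientId) {
        return positions.get(clientId);
    }
}
```

If the new leader writes `mysql-bin.000002:500` and the stale leader then writes `mysql-bin.000001:120`, the stored position silently moves backwards, which is the conflict described above.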

Contributor Author:

I understand, and I have tested this point. When I kill the process, execution goes straight to this location and prints the results we want, including the node information for the next leader. If there are other exceptions that would prevent this code from executing, please tell me and I will address them.

Reviewer:

I'll leave it to Ben to make a call on what should happen in this case, but I feel simply shutting down the process gracefully may be the easiest way to avoid conflicts after a leadership loss. Alternatively, it may be possible to call maxwell.terminate(); the way the JGroups-based HA implementation does it.
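
A minimal sketch of the terminate-on-loss idea, without the Curator dependency. `ReplicatorHandle` and its `terminate()` method are hypothetical stand-ins for Maxwell's shutdown path (the JGroups implementation calls `maxwell.terminate()`); this is not the PR's actual code.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical stand-in for the running Maxwell instance.
class ReplicatorHandle {
    final AtomicBoolean running = new AtomicBoolean(true);

    void terminate() {
        // In real Maxwell this would stop the binlog replicator threads.
        running.set(false);
    }
}

// Listener body sketched without Curator: notLeader() steps down
// instead of only logging a warning.
class StepDownListener {
    private final ReplicatorHandle maxwell;

    StepDownListener(ReplicatorHandle maxwell) {
        this.maxwell = maxwell;
    }

    public void notLeader() {
        // We are no longer the leader: stop producing events so the
        // new leader is the only process writing to the producer.
        maxwell.terminate();
    }
}
```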

Contributor Author:

In my opinion, when the leader becomes a follower, it only means that the zookeeper connection is abnormal or that the current node no longer meets the conditions for leadership. During this period the program does not track the maxwell connector status (status monitoring can be added later). It is simply a switch from one node to another: if a zookeeper connection problem causes the master/slave switchover, the program will not quit, but will become a follower. In later iterations I will add metric monitoring to maxwell to decide whether to switch master/slave based on those metrics, which requires guidance from Ben @osheroff.

Reviewer (@kovyrin, Feb 13, 2023):

> During this period, the program does not care about maxwell connector status

If we have lost the ZK connection and, consequently, the leader status, then the current Maxwell instance will at the very least start producing duplicate events, since the other instance that is now the leader is already replicating the same set of changes. Additionally, there is a chance of both instances overwriting each other's position information in maxwell's database, which AFAIU can have negative consequences as well.

> If the zookeeper connection problem causes the master/slave switchover, the program will not quit, but become a follower

What do you feel it should mean for a Maxwell instance to become a follower? (AFAIU, there is no notion of a follower mode in the current maxwell codebase.)

Contributor Author:

For example, if I start three maxwell instances, labeled 1, 2, and 3, where 1 is the leader and 2 and 3 are followers: while 1 is the leader, 2 and 3 are just daemon processes that do nothing. When 1 gives up leadership (not an exit caused by maxwell itself, but a server failure: for example, an exit caused by a restart, memory overflow, disk space, etc.), then 2 or 3 takes over from 1 and continues the collection task. If a mysql problem causes the maxwell process to exit, the problem persists no matter how many instances are started; that is not something high availability can fix. What I need to do is ensure that maxwell itself is highly available.

Reviewer:

Your solution absolutely solves the scenarios described. One thing I feel is missing (or I may be confused!) is the scenario where a leader, doing its leader stuff, replicating data, etc., loses its leadership while remaining alive and seemingly healthy (due to ZK connectivity issues, a ZK restart, or any other issue that forces a new election). In those cases the old leader needs to step down, stop doing its usual leader things, and move to a quiet follower mode (stop the binlog replicator, don't write into the position store anymore, etc.).
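
The quiet-follower idea could be sketched as a leadership flag that leader-only code paths check before doing work. Everything here is illustrative and assumed, not existing Maxwell API: Curator's `isLeader()`/`notLeader()` callbacks would flip the flag.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A leadership gate: the leader election callbacks flip the flag, and
// leader-only work (binlog replication, position writes) checks it
// before proceeding, so a demoted node goes quiet instead of exiting.
class LeadershipGate {
    private final AtomicBoolean leader = new AtomicBoolean(false);

    void isLeader()  { leader.set(true); }
    void notLeader() { leader.set(false); }

    boolean mayReplicate()     { return leader.get(); }
    boolean mayStorePosition() { return leader.get(); }
}
```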

Contributor Author:

I have tested the scenario you described and got the expected results. Please let me know if you see any other problems.

}catch (Exception e){
e.printStackTrace();
}finally {
lock.unlock();
}
}
});
