Stream data from two sources where one is eventually consistent and the other one loses its tail
This library provides the ability to use Kafka as storage for events.
Kafka is a perfect fit if you want streaming capabilities for your events.
However, the library also uses Cassandra, to keep data-access performance at an acceptable level and
to overcome Kafka's retention policy.
Cassandra is the default choice, but you may use any other storage that satisfies the following interfaces:
- Read side, called within the client library: EventualJournal
- Write side, called from the replicator app: ReplicatedJournal
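As a rough illustration of the read/write split, the two sides could be sketched like this (a simplified, in-memory sketch; the real traits in kafka-journal are effectful and have richer signatures, so the shapes below are illustrative only):

```scala
final case class Event(seqNr: Long, payload: String)

// Read side: what the client library queries after replication.
trait EventualJournal {
  def read(id: String, fromSeqNr: Long): List[Event]
}

// Write side: what the replicator app calls to persist events to Cassandra.
trait ReplicatedJournal {
  def append(id: String, events: List[Event]): Unit
}

// In-memory implementation of both sides, handy for tests; a real
// implementation would be backed by Cassandra (or another store).
final class InMemoryJournal extends EventualJournal with ReplicatedJournal {
  private var store = Map.empty[String, List[Event]].withDefaultValue(Nil)

  def append(id: String, events: List[Event]): Unit =
    store = store.updated(id, store(id) ++ events)

  def read(id: String, fromSeqNr: Long): List[Event] =
    store(id).filter(_.seqNr >= fromSeqNr)
}
```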
- Journal client publishes events to Kafka
- Replicator app stores events in Cassandra
- Client publishes a special marker to Kafka, so we can be sure there are no more events to expect
- Client reads events from Cassandra; however, at this point we are not yet sure that all events have been replicated from Kafka to Cassandra
- Client reads events from Kafka using the offset of the last event found in Cassandra
- We consider recovery finished when the marker is read from Kafka
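The recovery steps above can be sketched as a pure-Scala simulation (illustrative only; `Event`, `Marker`, and `recover` are hypothetical names for this sketch, not the library's API):

```scala
sealed trait Record
final case class Event(offset: Long, payload: String) extends Record
final case class Marker(offset: Long) extends Record

// Replay everything already replicated to Cassandra, then read the Kafka
// tail starting after the last replicated offset, until the marker is seen.
def recover(cassandra: List[Event], kafka: List[Record]): List[String] = {
  val replayed   = cassandra.map(_.payload)
  val lastOffset = cassandra.lastOption.fold(-1L)(_.offset)
  val tail = kafka
    .dropWhile {
      case Event(offset, _) => offset <= lastOffset // already replicated
      case Marker(_)        => false                // stop dropping at marker
    }
    .takeWhile { case _: Marker => false; case _ => true } // marker ends recovery
    .collect { case Event(_, payload) => payload }
  replayed ++ tail
}
```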
- A Kafka topic may be used for many different entities
- We don't need to retain all events in Kafka as long as they are stored in Cassandra
- We do not cover snapshots, yet
- The replicator is a separate application
- It is easy to replace Cassandra with a relational database
Performance of reading events depends on finding the offset closest to the marker, as well as on replication latency (the time between the moment an event is written to Kafka and the moment it becomes available from Cassandra).
The same Kafka consumer may be shared across many simultaneous recoveries.
- Client is allowed to read+write Kafka and read Cassandra
- Replicator is allowed to read Kafka and read+write Cassandra
Hence, we recommend configuring access rights accordingly.
The Kafka client tends to log some exceptions at error
level; however, these are harmless if the operation is
later retried successfully. Retriable exceptions usually extend RetriableException.
List of known "error" cases which may be ignored:
- Offset commit failed on partition .. at offset ..: The request timed out.
- Offset commit failed on partition .. at offset ..: The coordinator is loading and hence can't process requests.
- Offset commit failed on partition .. at offset ..: This is not the correct coordinator.
- Offset commit failed on partition .. at offset ..: This server does not host this topic-partition.
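The "harmless when retried" behavior can be illustrated with a generic retry helper (a plain-Scala sketch; the real Kafka client performs such retries internally, and `retry` here is a hypothetical helper, not part of this library):

```scala
import scala.annotation.tailrec
import scala.util.{Try, Success, Failure}

// Retry an action up to maxAttempts times, but only for exceptions the
// caller classifies as retriable (e.g. subclasses of RetriableException).
def retry[A](maxAttempts: Int)(isRetriable: Throwable => Boolean)(action: () => A): Try[A] = {
  @tailrec def loop(attempt: Int): Try[A] =
    Try(action()) match {
      case Failure(e) if isRetriable(e) && attempt < maxAttempts => loop(attempt + 1)
      case other                                                 => other
    }
  loop(1)
}
```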
In order to use kafka-journal as an akka persistence plugin, you have to add the following to your *.conf file:
akka.persistence.journal.plugin = "evolutiongaming.kafka-journal.persistence.journal"
Unfortunately, an akka persistence snapshot plugin is not implemented, yet.
addSbtPlugin("com.evolution" % "sbt-artifactory-plugin" % "0.0.2")
libraryDependencies += "com.evolutiongaming" %% "kafka-journal" % "3.4.0"
libraryDependencies += "com.evolutiongaming" %% "kafka-journal-persistence" % "3.4.0"
libraryDependencies += "com.evolutiongaming" %% "kafka-journal-replicator" % "3.4.0"
libraryDependencies += "com.evolutiongaming" %% "kafka-journal-eventual-cassandra" % "3.4.0"
- Jan 2019 Riga Scala Community
- Apr 2019 Amsterdam.scala
To run the unit tests, you need a Docker environment running (Docker Desktop, Rancher Desktop, etc.). Some tests expect
/var/run/docker.sock
to be available. With Rancher Desktop, you may need to amend your local setup with:
sudo ln -s $HOME/.rd/docker.sock /var/run/docker.sock