This repo aims to be a resource and real-world example project for building a microservices application with Monix and its sub-projects.
The application simulates the bare-bones logic of a possible backend for an auction shop. It defines endpoints for adding `Items` and `Actions`, an action being `Buy`, `Sell` or `Pawn`. It also supports fetching items, together with their action history, by id, name, category or state.
Having introduced the domain, let's jump to the technical side.
The platform is composed of two services, the dispatcher and the worker, which communicate via gRPC and Kafka.
The dispatcher represents the data entry point (HTTP) and also acts as a router, dispatching incoming requests to the available workers (gRPC). It does not implement any database access; instead, it organises and orchestrates the workload across the workers.
It implements an HTTP server that acts as the data entry point of the platform for adding and fetching items. The HTTP routes are pretty minimal:
- POST /item/add
- POST /item/action/buy
- POST /item/action/sell
- POST /item/action/pawn
- GET /item/fetch/{id}
- GET /item/fetch?name={name}
- GET /item/fetch?category={category}
- GET /item/fetch?state={state}
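
For illustration, here is a minimal sketch of how such routes could be laid out on top of Monix's `Task`, assuming an http4s server (the repo's actual server and handler wiring may differ; handlers and the query-param matchers for name/category/state are omitted):

```scala
import monix.eval.Task
import org.http4s.HttpRoutes
import org.http4s.dsl.Http4sDsl

// Illustrative route layout only; response bodies are placeholders.
object ItemRoutes extends Http4sDsl[Task] {

  val routes: HttpRoutes[Task] = HttpRoutes.of[Task] {
    case POST -> Root / "item" / "add"             => Ok("item added")
    case POST -> Root / "item" / "action" / "buy"  => Ok("buy action registered")
    case POST -> Root / "item" / "action" / "sell" => Ok("sell action registered")
    case POST -> Root / "item" / "action" / "pawn" => Ok("pawn action registered")
    case GET  -> Root / "item" / "fetch" / id      => Ok(s"item $id")
  }
}
```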
- **Server:** The gRPC server implementation simply expects `JoinRequest`s from the workers, returning `JoinResponse`s which can be either `Joined` or `Rejected`. This acts as a very basic protocol for dynamically adding new workers to the quorum, providing scalability to the platform. Note: nowadays this could be implemented much more easily by relying on an external discovery service and a gRPC load balancer (see the protocol sketch after this list).
- **Client:** The dispatcher also acts as a gRPC client that sends transactions, operations and fetch requests to its workers. As explained previously, an internal gRPC load balancer randomly chooses a different worker to forward each request to (see the load-balancer sketch after this list).
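
To make the protocol concrete, here is a sketch of its messages as plain Scala data types; the real `JoinRequest`/`JoinResponse` are protobuf-generated classes, and the field names below are assumptions:

```scala
// Sketch of the join protocol's messages as plain Scala data types.
final case class JoinRequest(host: String, port: Int)

sealed trait JoinResponse
case object Joined   extends JoinResponse // worker accepted into the quorum
case object Rejected extends JoinResponse // worker refused by the dispatcher
```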
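And a minimal sketch of the random worker selection, where `WorkerClient` and the surrounding names are hypothetical stand-ins:

```scala
import scala.util.Random

object LoadBalancer {
  // Stand-in for the dispatcher's handle on a joined worker.
  trait WorkerClient

  // Randomly pick one worker from the current quorum, if any.
  def pickWorker(quorum: Vector[WorkerClient]): Option[WorkerClient] =
    if (quorum.isEmpty) None
    else Some(quorum(Random.nextInt(quorum.size)))
}
```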
Workers can be added on demand, scaling with the workload. The gRPC protocol is only used for requesting (reading) data; data to be written is instead published to a broker, from which the worker continuously consumes and persists it to the database.
- **Client:** Right after the app starts, the worker sends a `JoinRequest` gRPC call to the dispatcher. If the response is `Joined`, it means the worker was added to the quorum and will start receiving gRPC requests.
- **Server:** The gRPC server only starts once a `Joined` confirmation has been received from the dispatcher; at that point the worker is entitled to receive `fetch` requests (see the startup sketch after this list).
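
A hedged sketch of that startup sequence, reusing the `JoinResponse` types sketched earlier (`sendJoinRequest` and `startGrpcServer` are hypothetical stand-ins):

```scala
import monix.eval.Task

object WorkerStartup {
  // Hypothetical stand-ins for the worker's gRPC client call and server wiring.
  def sendJoinRequest: Task[JoinResponse] = ???
  def startGrpcServer: Task[Unit]         = ???

  // Join the quorum first; only start serving `fetch` requests on `Joined`.
  val startup: Task[Unit] =
    sendJoinRequest.flatMap {
      case Joined   => startGrpcServer
      case Rejected => Task.raiseError(new IllegalStateException("join rejected"))
    }
}
```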
The workers continuously consume events from four different Kafka topics (`item`, `buy-actions`, `sell-actions` and `pawn-actions`), which are then persisted to their respective collections in MongoDB.
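
The shape of one such consumer, sketched with monix-kafka and assuming raw byte-array payloads plus a hypothetical `persistToMongo` handler:

```scala
import monix.eval.Task
import monix.kafka.{KafkaConsumerConfig, KafkaConsumerObservable}
import monix.reactive.Observable

object ItemConsumer {
  val consumerCfg: KafkaConsumerConfig = KafkaConsumerConfig.default.copy(
    bootstrapServers = List("localhost:9092"),
    groupId          = "workers"
  )

  // Hypothetical stand-in for persisting a decoded event to Mongo.
  def persistToMongo(payload: Array[Byte]): Task[Unit] = ???

  // Consume the `item` topic and persist every record as it arrives;
  // the other three topics follow the same shape.
  val inbound: Observable[Unit] =
    KafkaConsumerObservable[String, Array[Byte]](consumerCfg, List("item"))
      .mapEval(record => persistToMongo(record.value()))
}
```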
As the logic is shared between the four kinds of events, its implementation is generalized in the `InboundFlow` type class.
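
A hypothetical picture of what such a type class could look like; the member names are assumptions rather than the repo's exact definition:

```scala
import monix.eval.Task

// One instance per event kind (item, buy, sell, pawn), each knowing its
// source topic and how to persist into the matching Mongo collection.
trait InboundFlow[E] {
  def topic: String
  def persist(event: E): Task[Unit]
}
```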
In order to store the protobuf events directly in Mongo, it was required to define some `Codec`s.
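
As an illustration only, a generic codec could store each ScalaPB message as a raw protobuf blob; the repo's actual codecs may map fields individually:

```scala
import org.bson.codecs.{Codec, DecoderContext, EncoderContext}
import org.bson.{BsonBinary, BsonReader, BsonWriter}
import scalapb.{GeneratedMessage, GeneratedMessageCompanion}

import scala.reflect.ClassTag

// Hedged sketch: encode a protobuf message as BSON binary and decode it
// back through its ScalaPB companion.
final class ProtoCodec[A <: GeneratedMessage](companion: GeneratedMessageCompanion[A])(
    implicit ct: ClassTag[A]
) extends Codec[A] {

  override def encode(writer: BsonWriter, value: A, ctx: EncoderContext): Unit =
    writer.writeBinaryData(new BsonBinary(value.toByteArray))

  override def decode(reader: BsonReader, ctx: DecoderContext): A =
    companion.parseFrom(reader.readBinaryData().getData)

  override def getEncoderClass: Class[A] =
    ct.runtimeClass.asInstanceOf[Class[A]]
}
```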
Finally, the feeder is a microservice that feeds Redis with a list of fraudulent people, downloaded from a data source stored in S3.
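
A hypothetical outline of that pipeline on Monix's `Task`; the helpers and the bucket/key names are stand-ins for the repo's S3 and Redis code:

```scala
import monix.eval.Task

object Feeder {
  // Stand-ins for the real S3 download and Redis write.
  def downloadFraudList(bucket: String, key: String): Task[List[String]] = ???
  def addToRedis(person: String): Task[Unit]                             = ???

  // Download the list once and push every entry into Redis.
  val feed: Task[Unit] =
    downloadFraudList("feeder-bucket", "fraudulent-people.csv")
      .flatMap(people => Task.traverse(people)(addToRedis))
      .map(_ => ())
}
```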
Pending improvements:
- Write tests and create a CI pipeline.
- If a write to the dispatcher fails or takes more than X time, the dispatcher will temporarily cache that event in Redis; a scheduled job will then check for cached data and, if any is found, try to send it to one of the workers again (see the first sketch after this list).
- At the end of each request, publish the event to the Kafka `long_term_storage` topic.
- Refactor the dispatcher and worker apps to read the config files in a safe way, as the feeder currently does: use `monix.bio.IO` and return `IO[ConfigurationErrors, Config]` instead of using `loadOrThrow` (see the second sketch after this list).
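
For the cached-retry idea above, a hypothetical sketch of the scheduled job; every name here is a stand-in, since none of this exists in the repo yet:

```scala
import monix.eval.Task
import scala.concurrent.duration._

object RetryJob {
  type Event = Array[Byte] // placeholder for the cached event payload

  // Stand-ins for the Redis cache access and the gRPC resend.
  def drainCachedEvents: Task[List[Event]]     = ???
  def resendToWorker(event: Event): Task[Unit] = ???

  // Every 30 seconds, drain whatever was cached and try to resend it.
  val job: Task[Nothing] =
    drainCachedEvents
      .flatMap(events => Task.traverse(events)(resendToWorker))
      .map(_ => ())
      .delayExecution(30.seconds)
      .loopForever
}
```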
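And a sketch of the safe config loading, assuming PureConfig underneath; the `Config` case class is simplified and the mapping into `ConfigurationErrors` is left out:

```scala
import monix.bio.IO
import pureconfig.ConfigSource
import pureconfig.error.ConfigReaderFailures
import pureconfig.generic.auto._

// Simplified stand-in for the app's configuration.
final case class Config(host: String, port: Int)

object SafeConfig {
  // Typed-error loading: failures stay in IO's error channel instead of
  // being thrown, as `loadOrThrow` would do.
  def load: IO[ConfigReaderFailures, Config] =
    IO.fromEither(ConfigSource.default.load[Config])
}
```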