An aggregated Order Book in Erlang, which maintains the total Qty at each Price level on the bid and ask sides. The following events arrive (somehow) from an exchange:
```erlang
{new_order, OrderID :: integer(), bid | ask, Price :: number(), Qty :: integer()},
{delete_order, OrderID :: integer()},
{modify_order, OrderID :: integer(), NewQty :: integer()}
```
It is assumed that the exchange is responsible for matching orders and that the results of matching are reflected in the events sent to the Order Book (above).
Note: implemented in Erlang 21.
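For illustration only, here is a minimal sketch of how such events can be folded into an aggregated book; the module and function names are hypothetical and are not part of this application:

```erlang
%% Minimal illustration: aggregate per-order events into a total Qty per
%% {Side, Price} level. The state is two maps:
%%   Orders :: #{OrderID => {Side, Price, Qty}}   (needed for delete/modify)
%%   Book   :: #{{Side, Price} => TotalQty}       (the aggregated view)
-module(agg_sketch).
-export([new/0, apply_event/2]).

new() -> {#{}, #{}}.

apply_event({new_order, ID, Side, Price, Qty}, {Orders, Book}) ->
    {Orders#{ID => {Side, Price, Qty}}, add_qty(Side, Price, Qty, Book)};
apply_event({delete_order, ID}, {Orders, Book}) ->
    {Side, Price, Qty} = maps:get(ID, Orders),
    {maps:remove(ID, Orders), add_qty(Side, Price, -Qty, Book)};
apply_event({modify_order, ID, NewQty}, {Orders, Book}) ->
    {Side, Price, OldQty} = maps:get(ID, Orders),
    {Orders#{ID => {Side, Price, NewQty}},
     add_qty(Side, Price, NewQty - OldQty, Book)}.

%% Add QtyDiff to a price level, dropping the level when it reaches zero.
add_qty(Side, Price, Diff, Book) ->
    case maps:get({Side, Price}, Book, 0) + Diff of
        0   -> maps:remove({Side, Price}, Book);
        Qty -> Book#{{Side, Price} => Qty}
    end.
```

For example, `lists:foldl(fun agg_sketch:apply_event/2, agg_sketch:new(), Events)` rebuilds the aggregated book from a list of events.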
- compile: `make compile`
- dialyze: `make dialyze`
- xref: `make xref`
- lint (requires elvis): `make lint`
- test: `make test`
- release: `make release`
- Add more sophisticated tests with a load profile.
- Add a REST interface to the application.
- Dockerize.
Since the order_book_instrument server is a natural bottleneck for many order_book_exchange servers, it can be:
- Split into 2 servers: one for bid orders and one for ask orders. The cross, if it happens, should not be large or expensive to merge.
- Sharded into a number of servers, each responsible for handling its own book depth range. The pool of those servers can dynamically follow the moving top of the book and changes in book depth.
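As a rough illustration of the sharding idea (the helper name and range representation are assumptions, not existing code):

```erlang
%% Hypothetical routing helper: each shard server owns a half-open price range
%% [Low, High); an update is forwarded to the shard whose range covers its
%% price level.
-spec shard_for(number(), [{Low :: number(), High :: number(), pid()}]) ->
          {ok, pid()} | not_covered.
shard_for(Price, Ranges) ->
    case [Pid || {Low, High, Pid} <- Ranges, Price >= Low, Price < High] of
        [Pid | _] -> {ok, Pid};
        []        -> not_covered
    end.
```

Rebalancing as the top of the book moves would then amount to updating the ranges handed to this helper.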
Implementation sketch
- order_book_exchange: add a table `Updates :: {Timestamp :: unixtime(), Update :: {Price :: number(), Type :: bid | ask, QtyDiff :: integer()}}` with an entry for each order. Send the `Timestamp` to order_book_instrument, together with the exchange ID and the update data, via the `update_instrument` function (a sketch follows this list).
- order_book_instrument: add a map `LastUpdates :: #{Exchange :: order_book_exchange:id() => Timestamp}` to the state, containing the timestamps received in the last update from each order_book_exchange server. Periodically cache the current state of the order book table to a table `Cache :: {Timestamp, CacheTabID, LastUpdates}`:
  - Create a new entry in the `Cache` table with the current `Timestamp`, `LastUpdates` and `CacheTabID` = `undefined`.
  - Fork a new process, which will create a copy of the current order book table and `give_away` it back to the order_book_instrument server when finished (see the sketch after this list).
  - Until ownership of the new table has been transferred back, store all incoming updates in a temporary table.
  - On receiving the transfer message, apply the updates from the temporary table to the main one and update the last entry in the `Cache` table with the received `CacheTabID`.
  - Remove the temporary table.

  Update the existing `get_book/1` and add a new API function `get_book/2`:

  ```erlang
  %% Current book: the main ETS table plus any updates buffered while a
  %% snapshot copy is in progress.
  -spec get_book(id()) -> book().
  get_book(ID) ->
      MainTabID = get_tab_id(book, ID),
      Book = ets:tab2list(MainTabID),
      UpdatesTab = get_tab_id(updates, ID),
      case ets:info(UpdatesTab) of
          undefined -> Book;
          _ -> apply_updates(UpdatesTab, Book)
      end.

  %% Historical book: find the cached snapshot closest to Timestamp, then apply
  %% each exchange's updates in the appropriate direction up to that time.
  -spec get_book(id(), unixtime()) -> book().
  get_book(ID, Timestamp) ->
      Tab = get_tab_id(cache, ID),
      {InstrTimestamp, CacheTabID, LastUpdates} = find_proximate(Timestamp, Tab),
      Direction = get_direction(InstrTimestamp, Timestamp),
      Cache = ets:tab2list(CacheTabID),
      maps:fold(
          fun(Exch, ExchTimestamp, Acc) ->
              UpdTab = order_book_exchange:get_tab_id(updates, Exch),
              apply_exchange_updates(Direction, ExchTimestamp, Timestamp, UpdTab, Acc)
          end,
          Cache,
          LastUpdates
      ).
  ```

- order_book: add an interface API function for the `order_book_instrument:get_book/2` above (a sketch follows).
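A possible sketch of the per-exchange update log from the first item; the table layout, the time unit, and the exact arguments of `update_instrument` are assumptions based on the description above:

```erlang
%% Sketch only. Assumes Updates is a duplicate_bag (or ordered) ETS table of
%% {Timestamp, {Price, Type, QtyDiff}} entries and that unixtime() is seconds.
log_and_forward(UpdatesTab, InstrumentID, ExchangeID, Price, Type, QtyDiff) ->
    Timestamp = erlang:system_time(second),
    Update = {Price, Type, QtyDiff},
    true = ets:insert(UpdatesTab, {Timestamp, Update}),
    %% Forward the timestamp, exchange id and update data to the instrument
    %% server (the real function name and arity may differ).
    order_book_instrument:update_instrument(InstrumentID, ExchangeID,
                                            {Timestamp, Update}),
    Timestamp.
```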
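And a sketch of the snapshot step from the second item, assuming the book, cache and temporary tables are ETS tables owned by the order_book_instrument gen_server; the helper names, state layout and buffered-updates format are illustrative only:

```erlang
%% Sketch only: publish a cache entry with CacheTabID = undefined, copy the
%% book in a separate process and receive ownership of the copy via give_away.
start_snapshot(CacheTab, BookTab, LastUpdates) ->
    Timestamp = erlang:system_time(second),
    true = ets:insert(CacheTab, {Timestamp, undefined, LastUpdates}),
    Server = self(),
    spawn(fun() ->
              Copy = ets:new(book_snapshot, [set, public]),
              true = ets:insert(Copy, ets:tab2list(BookTab)),
              ets:give_away(Copy, Server, {snapshot_ready, Timestamp})
          end),
    Timestamp.

%% The instrument server receives the copy as an 'ETS-TRANSFER' message. For
%% simplicity the temporary table is assumed to hold full {PriceLevel, Qty}
%% rows, so they can be re-inserted into the main book table directly.
handle_info({'ETS-TRANSFER', CacheTabID, _From, {snapshot_ready, Timestamp}},
            #{book := BookTab, cache := CacheTab, temp := TempTab} = State) ->
    true = ets:insert(BookTab, ets:tab2list(TempTab)),
    [{Timestamp, undefined, LastUpdates}] = ets:lookup(CacheTab, Timestamp),
    true = ets:insert(CacheTab, {Timestamp, CacheTabID, LastUpdates}),
    true = ets:delete(TempTab),
    {noreply, State}.
```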
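Finally, the order_book wrapper from the last item could be as thin as the following (name and arity assumed):

```erlang
%% Sketch of the order_book API wrapper: delegate to the instrument server.
get_book(InstrumentID, Timestamp) ->
    order_book_instrument:get_book(InstrumentID, Timestamp).
```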