
Order Book

An aggregated Order Book in Erlang that maintains the total Qty at each Price level on the bid and ask sides. The following events arrive (somehow) from an exchange:

{new_order, OrderID :: integer(), bid | ask, Price :: number(), Qty :: integer()},
{delete_order, OrderID :: integer()},
{modify_order, OrderID :: integer(), NewQty :: integer()}

It is assumed that the exchange is responsible for matching orders and that the results are reflected in the events sent to the Order Book (above).
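
For orientation, a minimal sketch of how these events could be folded into such an aggregated book; this code is not part of the repository, and the function and variable names are only illustrative:

    %% Orders maps OrderID => {Side, Price, Qty};
    %% Book maps {Side, Price} => total Qty at that price level.
    apply_event({new_order, OrderID, Side, Price, Qty}, {Orders, Book}) ->
        {Orders#{OrderID => {Side, Price, Qty}},
         maps:update_with({Side, Price}, fun(T) -> T + Qty end, Qty, Book)};
    apply_event({delete_order, OrderID}, {Orders, Book}) ->
        {Side, Price, Qty} = maps:get(OrderID, Orders),
        {maps:remove(OrderID, Orders),
         maps:update_with({Side, Price}, fun(T) -> T - Qty end, Book)};
    apply_event({modify_order, OrderID, NewQty}, {Orders, Book}) ->
        {Side, Price, Qty} = maps:get(OrderID, Orders),
        {Orders#{OrderID := {Side, Price, NewQty}},
         maps:update_with({Side, Price}, fun(T) -> T - Qty + NewQty end, Book)}.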

Hints

Note: implemented in Erlang/OTP 21.

  • compile: make compile

  • dialyze: make dialyze

  • xref: make xref

  • lint (requires elvis): make lint

  • test: make test

  • release: make release

TODO

  • Add more sophisticated tests with a load profile.
  • Add REST interface to the application.
  • Dockerize.

Optimization

Since the order_book_instrument server is a natural bottleneck for the many order_book_exchange servers, it can be:

  1. Split into 2 servers: one for bid orders and one for ask orders. The cross, if it happens, should not be large or expensive to merge.
  2. Sharded into a number of servers, each responsible for handling its own book depth range. The pool of those servers can dynamically follow the moving top of the book and changes in book depth (see the sketch below).
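
For option 2, a rough routing sketch; every name and parameter here is hypothetical and only illustrates the idea of shards owning contiguous depth ranges around the top of the book:

    %% Map a price level to one of NumShards servers, each owning
    %% DepthPerShard consecutive levels counted from the current top of book.
    shard_for(Price, TopPrice, TickSize, NumShards, DepthPerShard) ->
        LevelsFromTop = abs(Price - TopPrice) / TickSize,
        min(trunc(LevelsFromTop / DepthPerShard), NumShards - 1).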

Provide Order Book state for any timestamp

Implementation sketch

  1. order_book_exchange: add a table:

    Updates :: {Timestamp :: unixtime(), Update :: {Price :: number(), Type :: bid | ask, QtyDiff :: integer()}}.

    with an entry for each order. Send the Timestamp to order_book_instrument together with the exchange id and the update data via the update_instrument function.
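
    A sketch of what the write path in order_book_exchange could look like; the record_update/4 helper and the update_instrument/3 arity are assumptions, only the names come from the description above:

    %% Record the update in this exchange's Updates table and forward it,
    %% together with the Timestamp, to the instrument server.
    record_update(ExchangeID, Price, Type, QtyDiff) ->
        Timestamp = erlang:system_time(second),
        true = ets:insert(get_tab_id(updates, ExchangeID),
                          {Timestamp, {Price, Type, QtyDiff}}),
        order_book_instrument:update_instrument(ExchangeID, Timestamp,
                                                {Price, Type, QtyDiff}).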

  2. order_book_instrument: add a map to the state:

    LastUpdates :: #{Exchange :: order_book_exchange:id() => Timestamp}.

    containing the timestamp received in the last update from each order_book_exchange server.
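
    For illustration, one way this map could be maintained; the message shape, the state record, and the apply_update/2 placeholder are assumptions:

    handle_cast({update_instrument, Exchange, Timestamp, Update},
                #state{last_updates = LastUpdates} = State0) ->
        %% Remember the latest timestamp seen from this exchange, then apply
        %% the update itself (apply_update/2 stands for the existing
        %% book-update logic).
        State = State0#state{last_updates = LastUpdates#{Exchange => Timestamp}},
        {noreply, apply_update(Update, State)}.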

    Periodically cache the current state of the order book table into a Cache table:

    Cache :: {Timestamp, CacheTabID, LastUpdates}
    1. Create a new entry in the Cache table with the current Timestamp, LastUpdates and CacheTabID = undefined.
    2. Fork a new process, which creates a copy of the current order book table and gives it away (ets:give_away) back to the order_book_instrument server when finished.
    3. Until the ownership transfer for the new table is received, store all incoming updates in a temporary table.
    4. On receiving the transfer message, apply the updates from the temporary table to the main one and update the last entry in the Cache table with the received CacheTabID.
    5. Remove the temporary table.
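
    A rough Erlang sketch of steps 1-5; the state record fields, the table options, and the assumption that book rows look like {{Side, Price}, Qty} are illustrative, not taken from the code:

    start_snapshot(#state{book_tab = BookTab, cache_tab = CacheTab,
                          last_updates = LastUpdates} = State) ->
        Timestamp = erlang:system_time(second),
        %% Step 1: cache entry with CacheTabID = undefined for now.
        true = ets:insert(CacheTab, {Timestamp, undefined, LastUpdates}),
        Owner = self(),
        %% Step 2: a helper process copies the book table (assumed readable,
        %% i.e. protected or public) and gives the copy away when done.
        spawn_link(fun() ->
                       Copy = ets:new(book_cache, [set, public]),
                       true = ets:insert(Copy, ets:tab2list(BookTab)),
                       true = ets:give_away(Copy, Owner, {snapshot_done, Timestamp})
                   end),
        %% Step 3: buffer incoming updates until the 'ETS-TRANSFER' arrives.
        State#state{pending_tab = ets:new(pending_updates, [set, private])}.

    handle_info({'ETS-TRANSFER', CacheTabID, _From, {snapshot_done, Timestamp}},
                #state{book_tab = BookTab, cache_tab = CacheTab,
                       pending_tab = PendingTab} = State) ->
        %% Step 4: apply the buffered per-level QtyDiffs to the main table and
        %% record the received CacheTabID in the entry created in step 1.
        ets:foldl(fun({Key, QtyDiff}, ok) ->
                      _ = ets:update_counter(BookTab, Key, QtyDiff, {Key, 0}),
                      ok
                  end, ok, PendingTab),
        [{Timestamp, undefined, LastUpdates}] = ets:lookup(CacheTab, Timestamp),
        true = ets:insert(CacheTab, {Timestamp, CacheTabID, LastUpdates}),
        %% Step 5: drop the temporary table.
        true = ets:delete(PendingTab),
        {noreply, State#state{pending_tab = undefined}}.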

    Update the existing get_book/1 and add a new API function get_book/2:

    -spec get_book(id()) ->
        book().
    get_book(ID) ->
        MainTabID = get_tab_id(book, ID),
        Book = ets:tab2list(MainTabID),
        UpdatesTab = get_tab_id(updates, ID),
        case ets:info(UpdatesTab) of
            undefined -> Book;
            _         -> apply_updates(UpdatesTab, Book)
        end.
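
    A possible shape for the apply_updates/2 helper referenced above, assuming book rows of the form {{Side, Price}, Qty} and pending-update rows of the form {{Side, Price}, QtyDiff}:

    %% Merge the pending per-level diffs into the book snapshot; levels that
    %% appear only in the updates table are included as well.
    apply_updates(UpdatesTab, Book) ->
        Updates = maps:from_list(ets:tab2list(UpdatesTab)),
        Merged = lists:foldl(
            fun({Key, Qty}, Acc) ->
                maps:update_with(Key, fun(Diff) -> Diff + Qty end, Qty, Acc)
            end,
            Updates,
            Book),
        maps:to_list(Merged).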
    
    -spec get_book(id(), unixtime()) ->
        book().
    get_book(ID, Timestamp) ->
        Tab = get_tab_id(cache, ID),
        {InstrTimestamp, CacheTabID, LastUpdates} = find_proximate(Timestamp, Tab),
        Direction = get_direction(InstrTimestamp, Timestamp),
        Cache = ets:tab2list(CacheTabID),
        maps:fold(
            fun(Exch, ExchTimestamp, Acc) ->
                UpdTab = order_book_exchange:get_tab_id(updates, Exch),
                apply_exchange_updates(Direction, ExchTimestamp, Timestamp, UpdTab, Acc)
            end,
            Cache,
            LastUpdates
        ).
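
    The find_proximate/2 and get_direction/2 helpers used above could look roughly like this, assuming the cache table is an ordered_set keyed by the snapshot Timestamp:

    %% Pick the snapshot taken at or just before the requested time; if the
    %% requested time precedes every snapshot, fall back to the earliest one
    %% (its updates are then rolled back instead of replayed forward).
    find_proximate(Timestamp, CacheTab) ->
        Key = case ets:prev(CacheTab, Timestamp + 1) of
                  '$end_of_table' -> ets:first(CacheTab);
                  K               -> K
              end,
        [Entry] = ets:lookup(CacheTab, Key),
        Entry.

    get_direction(InstrTimestamp, Timestamp) when InstrTimestamp =< Timestamp ->
        forward;
    get_direction(_InstrTimestamp, _Timestamp) ->
        backward.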
  3. order_book: add an interface API function that delegates to order_book_instrument:get_book/2 above (a minimal sketch follows).
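
    A minimal sketch of that delegation, assuming order_book_instrument exports its id() and book() types and a shared unixtime() type exists:

    %% order_book.erl: thin wrapper over the instrument-level call.
    -spec get_book(order_book_instrument:id(), unixtime()) ->
        order_book_instrument:book().
    get_book(InstrumentID, Timestamp) ->
        order_book_instrument:get_book(InstrumentID, Timestamp).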