This project contains some tools to benchmark various implementations of approximate nearest neighbor (ANN) search for different metrics. Visit http://sss.projects.itu.dk/ann-benchmarks to see the results of this benchmark.
Note: The main development of ann-benchmarks can be found at https://github.com/erikbern/ann-benchmarks. The version provided here is the one used for the paper "ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms" by M. Aumüller, E. Bernhardsson, and A. Faithfull, http://www.itu.dk/people/maau/additional/sisap2017-preprint.pdf.
- Annoy
- FLANN
- scikit-learn: LSHForest, KDTree, BallTree
- PANNS
- NearPy
- KGraph
- NMSLIB (Non-Metric Space Library): SWGraph, HNSW, BallTree, MPLSH
- RPForest
- FALCONN
- FAISS
- DolphinnPy
- Dolphinn
- All of the above algorithms also work in Hamming space
- Annoy-Hamming
- Multi-Index Hashing (MIH)
- SIFT
- GIST
- MNIST
- NYTimes TFIDF
- Random datasets
- GloVe
- Random datasets
We used Spherical hashing to generate Hamming space versions of
- SIFT
- NYTimes
We use the following three datasets from http://ssjoin.dbresearch.uni-salzburg.at/
- Flickr
- AOL
- Kosarak
Fast nearest neighbor search in high-dimensional spaces is an increasingly important problem, but so far there has been little effort to compare methods objectively.
Clone the repo and run `bash install.sh`. This will install all libraries and can take a while. It has been tested on Ubuntu 16.04. We advise running it only in a VM or in a Docker container (see our Dockerfile).
Downloading and preprocessing the data sets is done by running the `install/data-*.sh` scripts, e.g., `bash install/data-glove.sh` or `bash install/data-sift.sh`.
Running a set of algorithms with specific parameters works as follows:
- Check that `algos.yaml` contains the parameter settings that you want to test.
- To run experiments on SIFT, invoke `python ann_benchmarks/main --dataset sift-data --query-dataset sift-query --distance euclidean`. See `python ann_benchmarks/main --help` for more information on possible settings. Note that experiments can take a long time.
- To process the results, either use `python plot.py` or `python createwebsite.py`. The `website/` directory contains the CSS and JS files necessary for displaying the websites generated by `createwebsite.py`. An example call: `python createwebsite.py --plottype recall/time --latex --scatter --outputdir website/`.
You have two choices for including your own algorithm. If your algorithm has a Python wrapper (or is entirely written in Python), then all you need to do is add it to `ann_benchmarks/algorithms` by providing a small wrapper, along the lines of the sketch below.
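As an illustration, here is a minimal sketch of what such a wrapper might look like. It is an assumption-laden outline, not the project's actual base class: the real wrappers in `ann_benchmarks/algorithms` define the exact method names and signatures, which may differ from the `fit`/`query` interface used here. The wrapped "algorithm" is plain brute-force search with NumPy, purely for demonstration.

```python
import numpy as np

# Hypothetical wrapper in the spirit of the existing ones in
# ann_benchmarks/algorithms. The real wrappers derive from a common base
# class; method names and signatures there may differ from this sketch.
class BruteForceExample(object):
    def __init__(self, metric):
        self.name = 'BruteForceExample(metric=%s)' % metric
        self._metric = metric

    def fit(self, X):
        # Build the "index": here we simply store the data points.
        self._data = np.asarray(X, dtype=np.float32)
        if self._metric == 'angular':
            # Normalize so that Euclidean distance ranks like cosine distance.
            norms = np.linalg.norm(self._data, axis=1, keepdims=True)
            self._data = self._data / norms

    def query(self, v, n):
        # Return the indices of the n nearest neighbors of v.
        q = np.asarray(v, dtype=np.float32)
        if self._metric == 'angular':
            q = q / np.linalg.norm(q)
        dists = np.linalg.norm(self._data - q, axis=1)
        return np.argsort(dists)[:n]
```

A wrapper like this would then be listed in `algos.yaml` together with whatever parameter settings you want to test.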
If your algorithm does not provide a Python wrapper, you can include it using the subprocess system. Detailed documentation on how to do this can be found here [TBD]; alternatively, check out the wrappers written for Annoy-Hamming, Dolphinn, and MIH in the `install` directory. The snippet after this paragraph illustrates the general idea.
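The following snippet is purely illustrative and does not reproduce the project's actual subprocess protocol; it only shows the basic pattern of driving an external ANN program (a hypothetical `./my-ann-binary`) over stdin/stdout from Python. See the wrappers in `install` for how the real protocol works.

```python
import subprocess

# Illustration only: "./my-ann-binary" and its flags are made up for this
# example and are NOT part of ann-benchmarks.
proc = subprocess.Popen(
    ['./my-ann-binary', '--index', 'index.bin', '--k', '10'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    universal_newlines=True)

# Send one query point per line and read back the neighbor ids.
query = [0.12, 0.34, 0.56]
proc.stdin.write(' '.join(str(x) for x in query) + '\n')
proc.stdin.flush()
neighbors = [int(i) for i in proc.stdout.readline().split()]
print(neighbors)

proc.stdin.close()
proc.wait()
```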
- Everyone is welcome to submit pull requests with tweaks and changes to how each library is being used.
- In particular: if you are the author of any of these libraries, and you think the benchmark can be improved, consider making the improvement and submitting a pull request.
- This is meant to be an ongoing project that represents the current state.
- Make everything easy to replicate, including installing and preparing the datasets.
- Try many different values of parameters for each library and ignore the points that are not on the precision-performance frontier.
- High-dimensional datasets with approximately 100-1000 dimensions. This is challenging but also realistic. Not more than 1000 dimensions because those problems should probably be solved by doing dimensionality reduction separately.
- No batching of queries, use single queries by default. ANN-Benchmarks saturates CPU cores by using a thread pool.
- Avoid extremely costly index building (more than several hours).
- Focus on datasets that fit in RAM. Out of core ANN could be the topic of a later comparison.
- We currently support CPU-based ANN algorithms. GPU support is planned as future work.
- Do a proper train/test split of index data and query points (a small sketch of this evaluation idea follows below).
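To make the last two points concrete, here is a small, hedged sketch of how recall can be measured against brute-force ground truth on held-out query points. It illustrates the evaluation idea only and is not the project's measurement code; the dataset, split, and `k` below are made up for the example.

```python
import numpy as np

def brute_force_neighbors(train, queries, k):
    # Exact k-NN by exhaustive search; used as ground truth.
    ids = []
    for q in queries:
        dists = np.linalg.norm(train - q, axis=1)
        ids.append(np.argsort(dists)[:k])
    return ids

def recall(approx_ids, true_ids):
    # Fraction of true neighbors that the approximate algorithm returned.
    hits = sum(len(set(a) & set(t)) for a, t in zip(approx_ids, true_ids))
    return hits / float(sum(len(t) for t in true_ids))

# Hypothetical usage: split a random dataset into index ("train") and query
# points, compute ground truth once, then evaluate an ANN algorithm's answers.
rng = np.random.RandomState(0)
data = rng.rand(1000, 100).astype(np.float32)
train, queries = data[:900], data[900:]
true_ids = brute_force_neighbors(train, queries, k=10)
# approx_ids would come from the ANN algorithm under test; here we reuse
# the exact answers just to show that recall() then evaluates to 1.0.
print(recall(true_ids, true_ids))
```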
See http://sss.projects.itu.dk/ann-benchmarks.
Note that NMSLIB saves indices in the `indices` directory. If the tests are re-run with a different seed and/or a different number of queries, the contents of this directory should be deleted.
The project is fully tested using Travis, with unit tests run for all different libraries and algorithms.
- sim-shootout by Radim Řehůřek
- This blog post