RedisAI/benchmarks


This is a very OLD benchmarking suite built for experimentation. The numbers/results shown here are no longer valid; RedisAI has grown quite a lot since then. Head over to the new benchmarking suite for the latest results.

benchmarks

This repo aims to benchmark RedisAI against alternative serving solutions, including:

  • RedisAI with Tensorflow as backend vs Tensorflow python runtime
  • RedisAI with PyTorch as backend vs PyTorch python runtime
  • RedisAI vs GRPC servers (Tensorflow Serving & custom GRPC server for PyTorch and ONNX)
  • RedisAI vs Flask servers
  • RedisAI with ONNX as backend vs ONNXRuntime (upcoming)
  • RedisAI vs MxNet Model Server (upcoming)
  • RedisAI vs TensorRT (upcoming)

Models used for the benchmark can be found here, and the Docker images can be found on Docker Hub.

Current Results

  • OS Platform and Distribution: Linux Ubuntu 18.04
  • Device: CPU
  • Python version: 3.6
  • Tensorflow version: 1.12.0
  • TensorFlow optimizations: ON

RedisAI Benchmarking resnet running on pytorch:cpu

RedisAI Benchmarking resnet running on tensorflow:cpu

(grpc for tensorflow is TFServing)

Run experiments

To run the benchmarks (right now they run with TensorFlow built with optimizations ON; if your hardware doesn't support it, keep an eye on this repo, as we'll soon update it to use the prebuilt TF binary available on PyPI), cd into the repo and run

python run.py

This will pull the required Docker images, bring them up when required, and run the clients for each. Available command-line options are

usage: run.py [-h] [--device {cpu,gpu,all}]
              [--backend {tensorflow,pytorch,all}] [--count COUNT] [--exp ...]

optional arguments:
  -h, --help            show this help message and exit
  --device {cpu,gpu,all}
                        Run benchmarking for CPU or GPU or both
  --backend {tensorflow,pytorch,all}
                        Run benchmarking for tensorflow or pytorch or both
  --count COUNT         How many iterations to take average from, for each
                        experiment
  --exp ...

--exp specifies which experiments to run; the currently available ones are native, flask, redisai, and grpc. Make sure --exp comes at the end of your command. For example, to run only the redisai and flask experiments for PyTorch, with just two iterations:

python run.py --backend pytorch --count 2 --exp flask redisai
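The --count flag controls how many iterations each experiment's latency is averaged over. A minimal sketch of that averaging pattern (the run_once callable below is hypothetical, standing in for a single inference call such as one Flask request or one RedisAI model run; it is not part of run.py's actual API):

```python
import time
import statistics

def benchmark(run_once, count):
    """Time `run_once` `count` times and return the mean latency in seconds.

    `run_once` is a placeholder for one inference call; any callable works.
    """
    latencies = []
    for _ in range(count):
        start = time.perf_counter()
        run_once()
        latencies.append(time.perf_counter() - start)
    return statistics.mean(latencies)

# Dummy workload standing in for one inference call, averaged over 2 runs
# (mirroring `--count 2` above).
mean_latency = benchmark(lambda: sum(range(1000)), count=2)
```

Each experiment in run.py reports a figure of this kind, which is what the result charts above compare across backends.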
