
CNTK Evaluation Overview


Once you have trained a model, you need a way to evaluate it in your target environment. As a reminder, there are several ways to create models with CNTK, and two different formats in which to store them.

The original CNTK (prior to CNTK 2.0) only supports a format we now call the model-v1 format. It was originally created by using CNTK.EXE with BrainScript. With CNTK 2.0 a Protobuf-based format was introduced, which is now known as the model-v2 format. The following table presents an overview of how the different model formats are created:

model-creation               model-v1     model-v2
CNTK.EXE (BrainScript)       YES          NO
CNTK-library (Python, C++)   deprecated   YES

For more details on creating the different model formats, refer to the CNTK model format page.
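
To make the table above concrete, here is a minimal sketch of producing a model-v2 file with the CNTK 2.x Python API (the CNTK-library route). The tiny network and the file name are assumptions chosen purely for illustration.

```python
import cntk as C

# Define a tiny one-layer network purely for illustration.
features = C.input_variable(2)
z = C.layers.Dense(1)(features)

# Saving through the CNTK-library writes the Protobuf-based model-v2 format.
z.save("tiny_model.dnn")
```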

CNTK model evaluation methods

Aside from training a model, CNTK provides several ways to evaluate it:

  • CNTK.EXE allows loading and evaluating models through BrainScript.
  • CNTK-EvalDLL allows evaluation of models through C++ and C#. It is distributed with the binary download package and the available NuGet package.
  • CNTK-library allows evaluation through C++ and Python (see the sketch after this list); C# support is under development.
    • Supports both CPU and GPU devices.
    • Supports evaluation of multiple requests in parallel via multi-threading.
    • Shares model parameters among multiple threads when the same model is loaded, which significantly reduces memory usage when running evaluation in a service environment.
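
As an illustration of the CNTK-library route, the following is a minimal sketch using the CNTK 2.x Python API. The model file name (tiny_model.dnn, matching the sketch above) and the random input batch are assumptions for this example, not part of the original page.

```python
import numpy as np
import cntk as C
from cntk.ops.functions import CloneMethod

# Load a trained model-v2 file (file name is an assumption for this sketch).
model = C.load_model("tiny_model.dnn", device=C.device.cpu())

# Build a dummy input batch matching the shape of the model's first input.
input_var = model.arguments[0]
batch = np.random.random((1,) + input_var.shape).astype(np.float32)

# Run a single forward pass (evaluation) and print the output.
result = model.eval({input_var: batch})
print(result)

# For parallel evaluation from multiple threads, each thread can work on a
# clone that shares the loaded parameters instead of copying them.
shared_clone = model.clone(CloneMethod.share)
```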

The following table gives an overview of the evaluation alternatives and the supported model formats:

model-evaluation                   model-v1   model-v2
CNTK.EXE (BrainScript)             YES        NO
EvalDLL (C++, C#, ASP and Azure)   YES        NO
CNTK-library (Python, C++)         YES        YES

The following pages provide detailed information about model evaluation in different scenarios:
