CNTK Evaluation Overview

Once you have trained a model, you need a way to evaluate it in your target environment. As a reminder, there are several ways to create models with CNTK, and there are two different formats to store a model in.

The original CNTK (prior to CNTK2) supports only the format we now call the model-V1 format; models in this format are created by CNTK.EXE with BrainScript. CNTK2 introduced a Protobuf-based format, now known as the model-V2 format. The following table gives an overview of creating the different model formats:

| Model creation | model-V1 | model-V2 |
|---|---|---|
| CNTK.EXE (BrainScript) | YES | NO |
| CNTK library (Python, C#) | YES | YES |

For more details on creating the different model formats, refer to the CNTK model format page.
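
For illustration, here is a minimal Python sketch of creating a model with the CNTK library and saving it in the model-V2 format. The network, its dimensions, and the file name are made up for the example.

```python
import cntk as C

# A tiny, made-up network just to have something to save.
x = C.input_variable(2)
z = C.layers.Dense(1)(x)

# Function.save writes the Protobuf-based model-V2 format used by the CNTK library.
# (In early CNTK 2 betas this method was named save_model.)
z.save("model.cntk")
```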

CNTK model evaluation methods

Aside from training a model, CNTK provides several ways to evaluate it:

  • CNTK.EXE allows loading and evaluating models through BrainScript.
  • CNTK EvalDLL allows evaluation of models through C++ and C#. It is distributed in the binary download package and as a NuGet package.
  • The CNTK library allows evaluation through C++ and Python; C# support is under development. It offers the following (see the Python sketch after this list):
    • Support for both CPU and GPU devices.
    • Evaluation of multiple requests in parallel using multi-threading.
    • Sharing of model parameters among multiple threads when the same model is loaded. This can significantly reduce memory usage when running evaluation in a service environment.
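
The following Python sketch illustrates the last two points under assumed names (model file, input shape, thread count): the model is loaded once, and each request evaluates a clone that shares the loaded parameters.

```python
import numpy as np
import cntk as C
from cntk.ops.functions import CloneMethod
from concurrent.futures import ThreadPoolExecutor

# Load the trained model once (file name assumed for the example).
z = C.load_model("model.cntk")

def evaluate(sample):
    # Clone with shared parameters, so concurrent requests do not
    # duplicate the model weights in memory.
    instance = z.clone(CloneMethod.share)
    return instance.eval({instance.arguments[0]: sample})

# A few dummy requests; the shape must match the model's input variable.
requests = [np.random.rand(1, 2).astype(np.float32) for _ in range(4)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(evaluate, requests))
```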

The following table gives an overview of the evaluation alternatives and the supported model formats:

| Model evaluation | model-V1 | model-V2 |
|---|---|---|
| CNTK.EXE (BrainScript) | YES | NO |
| EvalDLL (C++, C#, ASP, Azure) | YES | NO |
| CNTK library (Python, C++) | YES | YES |
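
As the table shows, the CNTK library reads both formats. A minimal Python sketch with assumed file names is shown below; the device argument selects whether evaluation runs on the CPU or a GPU.

```python
import cntk as C

# load_model reads the Protobuf-based model-V2 format and, for backward
# compatibility, legacy model-V1 files as well (file names assumed).
model_v2 = C.load_model("model_v2.cntk", device=C.device.cpu())
model_v1 = C.load_model("legacy_model_v1.dnn", device=C.device.cpu())
# Use C.device.gpu(0) instead to evaluate on the first GPU.
```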

The following pages provide detailed information about model evaluation in different scenarios:
