CNTK 1.7.1 Release Notes
This page has migrated to our new site. Please update any bookmarks.
This is a summary of what's new in the CNTK 1.7.1 Binary Release.
There are two breaking changes in this release. Please read this section carefully:
- Layers library default initialization was changed from `heNormal` to `glorotNormal`. Pass `init="heNormal"` to get the 1.7 behaviour.
- `fsAdagrad` had a bug. Learning rates must be retuned. To somewhat approximate the old behaviour, scale by `sqrt(number of parameter tensors)/400` (see the sketch after this list).
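For illustration, a minimal BrainScript sketch of both workarounds; the layer, its dimension, and the input name are hypothetical, and the learning-rate numbers are made up to show the arithmetic:

```
# Breaking change 1: restore the CNTK 1.7 default initialization explicitly.
# DenseLayer, its dimension, and `features` are illustrative placeholders.
h = DenseLayer {1024, init="heNormal"} (features)

# Breaking change 2: approximate the old fsAdagrad behaviour by rescaling the
# learning rate. E.g. with 16 parameter tensors and an old rate of 0.1:
#     newRate = 0.1 * sqrt(16) / 400 = 0.001
```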
We have the following improvements in BrainScript.
- BrainScript now allows relational operators inline, and scalar constants are automatically cast to `Constant()`. Example:
  `HammingLoss (y, p) = ReduceSum (y != (p > 0.5))`
  Compare this to the previous syntax:
  `HammingLoss (y, p) = ReduceSum (NotEqual (y, (Greater (p, Constant(0.5)))))`
- The `edit` action can now use BrainScript.
The following changes and improvements are introduced in v.1.7.1:
- Model evaluation support for Azure Applications. The Evaluate a model in an Azure WebApi section in the Wiki provides detailed steps.
- The Extended Eval interface adds support for evaluation of RNN models.
- Adding `forceDeterministicAlgorithms=true` to the configuration will force the use of deterministic algorithms where possible. This flag forces the use of only a single thread for MKL and OpenMP operations (a configuration sketch follows this list).
- IMPORTANT! The determinism changes require a new version of the CNTK Custom MKL (v2). For binary downloads this is included in the package. If you build CNTK from source, please follow the installation instructions described in the Wiki for Windows or Linux.
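As a sketch, the flag is set at the top level of the configuration; everything in this excerpt other than `forceDeterministicAlgorithms` itself is an illustrative placeholder:

```
# Hypothetical CNTK config excerpt; "trainNetwork" is a placeholder command name.
command = trainNetwork
forceDeterministicAlgorithms = true
```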
We have made the following performance related improvements:
- GPU prefetch with pinned memory is implemented for the new readers (`HTKDeserializers`, `CNTKTextFormat`, and the image reader)
- Type optimizations in the image reader that decrease memory pressure
- Optimized logging: most bulk logging output now requires `traceLevel=1`
- `ParallelTrain.numGradientBits` can now change over epochs (see the sketch after this list)
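A configuration sketch of the last two items; the block layout and the per-epoch schedule syntax (`value*epochs:value`, as used by settings such as `learningRatesPerMB`) are assumptions for illustration:

```
traceLevel = 1            # most bulk logging output now requires this

SGD = [
    ParallelTrain = [
        parallelizationMethod = "DataParallelSGD"
        # Assumed schedule syntax: 32-bit gradients for the first 2 epochs, then 1 bit.
        numGradientBits = 32*2:1
    ]
]
```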
You will find the following fixes in this release:
- Fix for packed sequences
- Correctness fixes for Sequence2Sequence
- Automated minibatch scaling
- Optimized `lstm` from cuDNN 5.1
- Automatic minibatch sizing no longer affects accuracy
- Improved performance for certain kinds of recurrent networks (`PastValue()`)
- `fanout` in `glorot` initialization is now correct
- Fix for a dimension error in the cuDNN RNN wrapper
- `fsAdagrad` denominator is now aggregated correctly
- `LSTMBlock{}` and `StabilizerLayer{}` no longer create parameters from inside `apply()`
- Improved default for the `BatchNormalizationLayer{}` time constant
Starting from v.1.5 you can use a preview of the Python API. See the CNTK v.1.5 Release Notes for further instructions.