Releases · tensorflow/tensor2tensor
v1.3.2
- Small improvements to attention visualization in Colab.
v1.3.1
v1.3.0
- WARNING: Checkpoints produced with older versions break with this release due to new variable scoping
- Various changes make T2T models and problems compatible with the new TF Eager mode - we'll have more on that soon
- `tpu_trainer` is becoming more fully featured
- Internal refactoring moving towards more flexibility in specifying the Estimator `input_fn` and `model_fn`
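For context, the two pieces being made pluggable are standard `tf.estimator` concepts: an `input_fn` that supplies batches and a `model_fn` that builds the model. A minimal, generic TF 1.x sketch (not T2T's internals; all names and values here are illustrative):

```python
# Generic TF 1.x Estimator sketch: input_fn feeds data, model_fn defines the
# model, loss, and train op. Illustrative only - not tensor2tensor code.
import tensorflow as tf

def input_fn():
  # Tiny toy dataset: learn y = 2x.
  features = {"x": [[1.0], [2.0], [3.0], [4.0]]}
  labels = [[2.0], [4.0], [6.0], [8.0]]
  dataset = tf.data.Dataset.from_tensor_slices((features, labels))
  return dataset.repeat().batch(2)

def model_fn(features, labels, mode):
  predictions = tf.layers.dense(features["x"], 1)
  loss = tf.losses.mean_squared_error(labels, predictions)
  train_op = tf.train.AdamOptimizer(0.01).minimize(
      loss, global_step=tf.train.get_global_step())
  return tf.estimator.EstimatorSpec(
      mode, predictions=predictions, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=model_fn)
estimator.train(input_fn=input_fn, steps=100)
```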
v1.2.9
v1.2.8
- Batch norm should now work in T2T - fixed the custom variable getters
- Simplified `ImageModality` and removed `SmallImageModality`
- Simplified `ClassLabelModality` and removed `ClassLabel1DModality`
- New modality with CTC loss (see the sketch after this list)
- New vanilla_gan model that's a good example of a simple GAN
- TPU advances: Xception, Resnet50, and Transformer verified to work, code path uses Experiment, usage doc for Cloud TPU alpha customers
- Various small fixes, improvements, features
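As a rough illustration of what a CTC loss involves (plain TF 1.x `tf.nn.ctc_loss`, not the new modality's actual code; shapes and values below are made up):

```python
# Toy CTC loss example with tf.nn.ctc_loss. Illustrative shapes/values only.
import tensorflow as tf

batch, max_time, num_classes = 2, 5, 4  # last class is the CTC blank

# Time-major logits: [max_time, batch, num_classes].
logits = tf.random_normal([max_time, batch, num_classes])

# Sparse integer label sequences, one short sequence per batch element.
labels = tf.SparseTensor(indices=[[0, 0], [1, 0]],
                         values=[1, 2],
                         dense_shape=[batch, 1])
seq_len = tf.fill([batch], max_time)

loss = tf.nn.ctc_loss(labels, logits, seq_len)  # per-example CTC loss
mean_loss = tf.reduce_mean(loss)
```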
v1.2.7: Merge pull request #396 from rsepassi/push
- Fixed data generators for translation tasks. Great thanks to @vince62s and @martinpopel for your PRs and reviews and all the help!
- Updated LSTM models and attention. Great thanks @kolloldas for the attention work and @epurdyf for pointing out initializer problems!
- Added some variations of the transformer model.
- Bug-fixes and cleanups.
v1.2.6
v1.2.5
- Various bug fixes, improvements, and additions
- Checkpoint Breaking Note: We'd like to have good defaults as well as immutable hparams sets, so we're trying a new versioned naming scheme, starting with the Transformer hparams. `transformer_base` is now an alias that points to a versioned hparams set: it currently calls `transformer_base_v2`, and the previous version is now `transformer_base_v1` (so if you have an old checkpoint, use hparams set `transformer_base_v1`). This way, if you're just trying something out, you can use `transformer_base` and know that you have a set of up-to-date good defaults. If you want to maintain reproducibility across T2T versions, use one of the versioned names, e.g. `transformer_base_v2`.
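The pattern looks roughly like this (the `registry.register_hparams` decorator is T2T's; the hparams contents below are placeholders, not the real Transformer defaults):

```python
# Sketch of the versioned-hparams aliasing scheme described above.
# The registry decorator is tensor2tensor's; hparams values are placeholders.
import tensorflow as tf
from tensor2tensor.utils import registry

@registry.register_hparams
def transformer_base_v1():
  # Stand-in for the original transformer_base defaults.
  return tf.contrib.training.HParams(hidden_size=512, num_hidden_layers=6)

@registry.register_hparams
def transformer_base_v2():
  # v2 layers its changes on top of v1 (illustrative tweak only).
  hparams = transformer_base_v1()
  hparams.set_hparam("hidden_size", 1024)
  return hparams

@registry.register_hparams
def transformer_base():
  # Unversioned alias: always points at the latest versioned set.
  return transformer_base_v2()
```

With that in place, passing `--hparams_set=transformer_base_v1` to the trainer pins the old defaults when restoring an old checkpoint.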
v1.2.4
v1.2.3
- Transformer now supports fast decoding! The decoding path used to recompute the entire sequence at each additional timestep; it now caches as it goes (see the sketch at the end of these notes).
- We now support `SavedModel` exports
- New, more thorough documentation
- Travis builds for all PRs and commits for Python 2 and 3!
- The decoding flags for `t2t_decoder` have all been merged into a single `HParams` object that can be modified with the flag `--decode_hparams`
- Various feature additions, bug fixes, and improvements
- Note: Parameter checkpoints for the Transformer model may be broken because of a bug with variable sharing in `layer_norm`
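To illustrate the fast-decoding change noted at the top of this release, here is a toy single-head attention step with an incremental key/value cache (pure NumPy, nothing T2T-specific; all names and dimensions are made up):

```python
# Toy incremental decoding: cache keys/values per step so each new timestep
# attends over the cached prefix instead of recomputing the whole sequence.
import numpy as np

d = 8  # model dimension (illustrative)
rng = np.random.RandomState(0)
Wq, Wk, Wv = (rng.randn(d, d) for _ in range(3))

cache = {"k": np.zeros((0, d)), "v": np.zeros((0, d))}

def softmax(x):
  e = np.exp(x - x.max())
  return e / e.sum()

def decode_step(x, cache):
  """One self-attention step; O(t) work instead of recomputing all t steps."""
  q, k, v = x @ Wq, x @ Wk, x @ Wv
  cache["k"] = np.vstack([cache["k"], k])  # append this step's key
  cache["v"] = np.vstack([cache["v"], v])  # append this step's value
  weights = softmax(cache["k"] @ q / np.sqrt(d))
  return weights @ cache["v"]

for t in range(4):
  out = decode_step(rng.randn(d), cache)
```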