
Releases: tensorflow/tensor2tensor

v1.3.2 (04 Dec, commit d9f807c)
  • Small improvements for attention visualization in Colab.

v1.3.1 (02 Dec, commit 970dac9)
  • Improvements for TF Eager compatibility

v1.3.0 (29 Nov, commit e3cd447)
  • WARNING: Checkpoints produced with old versions break with this new release due to new variable scoping
  • Various changes make T2T models and problems compatible with the new TF Eager mode - we'll have more on that soon
  • tpu_trainer is becoming more fully featured
  • Internal refactoring moves toward more flexibility in specifying the Estimator input_fn and model_fn

v1.2.9 (14 Nov, commit 5233253)
  • Quick bug fixes

v1.2.8 (11 Nov, commit 75b75f2)
  • Batch norm should now work in T2T - fixed the custom variable getters (see the generic custom-getter sketch after this list)
  • Simplified ImageModality and removed SmallImageModality
  • Simplified ClassLabelModality and removed ClassLabel1DModality
  • New modality with CTC loss
  • New vanilla_gan model that's a good example of a simple GAN
  • TPU advances: Xception, Resnet50, and Transformer verified to work; the code path uses Experiment; usage doc added for Cloud TPU alpha customers
  • Various small fixes, improvements, features
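
The custom variable getters mentioned above refer to TF 1.x's variable_scope custom_getter mechanism. Below is a minimal, generic sketch of that mechanism, not T2T's actual fix; the getter name and the cast it performs are illustrative. The key property a correct getter needs is delegating to the wrapped getter, so that variables created inside library layers (e.g. batch norm's beta/gamma and moving statistics) still reach the variable store.

    import tensorflow as tf  # TF 1.x API

    def float32_getter(getter, name, *args, **kwargs):
      # A custom getter wraps every tf.get_variable call made inside the
      # scope. It must call through to `getter`; a getter that constructs
      # variables itself breaks layers that create their own variables,
      # such as batch normalization.
      var = getter(name, *args, **kwargs)
      if var.dtype.base_dtype != tf.float32:
        var = tf.cast(var, tf.float32)  # illustrative post-processing
      return var

    with tf.variable_scope("body", custom_getter=float32_getter):
      x = tf.random_normal([8, 16])
      # batch_normalization creates beta, gamma, and the moving statistics
      # through the custom getter above.
      y = tf.layers.batch_normalization(x, training=True)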

v1.2.7: Merge pull request #396 from rsepassi/push (03 Nov, commit 097ea5f)
  • Fixed data generators for translation tasks. Many thanks to @vince62s and @martinpopel for their PRs, reviews, and all the help!
  • Updated LSTM models and attention. Many thanks to @kolloldas for the attention work and to @epurdyf for pointing out initializer problems!
  • Added some variations of the transformer model.
  • Bug-fixes and cleanups.

v1.2.6 (27 Oct)
  • Refactored Translate problems courtesy of @vince62s
  • Fast beam-search decoding for the Transformer model (pass --decode_hparams='use_last_position=True' to t2t-decoder to enable it; see the sketch after this list)
  • Various improvements and bug fixes
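
The --decode_hparams flag packs decoding options into a single comma-separated string that is parsed onto an HParams object. Here is a minimal sketch of that parsing, assuming TF 1.x with tf.contrib available; the default field values shown are illustrative, not T2T's exact decode defaults:

    import tensorflow as tf  # TF 1.x, tf.contrib available

    # Illustrative defaults; T2T's real decode hparams set may differ.
    decode_hp = tf.contrib.training.HParams(
        beam_size=4,
        alpha=0.6,
        use_last_position=False)

    # A flag value like --decode_hparams='use_last_position=True' is applied
    # by parsing the string: matching fields are overridden in place.
    decode_hp.parse("use_last_position=True")
    print(decode_hp.use_last_position)  # True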

v1.2.5 (16 Oct)
  • Various bug fixes, improvements, and additions
  • Checkpoint Breaking Note: We'd like to have good defaults as well as immutable hparams sets, so we are trying a new versioned naming scheme, starting with the Transformer hparams. transformer_base is now an alias that points to a versioned hparams set: it currently calls transformer_base_v2, and the previous version is now transformer_base_v1 (so if you have an old checkpoint, use hparams set transformer_base_v1). If you're just trying something out, use transformer_base and know that you have a set of up-to-date good defaults. If you want reproducibility across T2T versions, use one of the versioned names, e.g. transformer_base_v2. A sketch of pinning to a versioned set follows this note.
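
To make the reproducibility advice concrete, here is a minimal sketch of pinning a custom hparams set to a versioned base via the T2T registry; the set name and the learning-rate override are illustrative:

    from tensor2tensor.models import transformer
    from tensor2tensor.utils import registry

    @registry.register_hparams
    def transformer_base_my_tweak():
      # Start from the pinned transformer_base_v1 for reproducibility across
      # T2T releases; transformer_base itself is a moving alias (currently
      # pointing at transformer_base_v2).
      hparams = transformer.transformer_base_v1()
      hparams.learning_rate = 0.05  # illustrative override
      return hparams

Once registered, such a set can be selected by name, e.g. --hparams_set=transformer_base_my_tweak.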

v1.2.4 (30 Sep)
  • Various cleanups, fixes, and feature additions - see commit history
  • More robust Travis CI tests

v1.2.3 (22 Sep)
  • Transformer now supports fast decoding! The decoding path used to recompute the entire sequence at each new timestep; it now caches as it goes (a generic sketch of this caching idea appears after this list)
  • We now support SavedModel exports
  • New, more thorough documentation
  • Travis builds for all PRs and commits for Python 2 and 3!
  • The decoding flags for t2t_decoder have all been merged into a single HParams object that can be modified with the flag --decode_hparams
  • Various feature additions, bug fixes, and improvements
  • Note: Parameter checkpoints for the Transformer model may be broken because of a bug with variable sharing in layer_norm
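
To illustrate what the fast-decoding cache buys, here is a generic NumPy sketch of key/value caching for autoregressive attention. It illustrates the idea only, not T2T's implementation; all names, shapes, and projections are toy choices:

    import numpy as np

    d = 8  # toy model dimension
    rng = np.random.RandomState(0)
    wq, wk, wv = (rng.randn(d, d) for _ in range(3))  # toy projections

    def attend(q, k, v):
      # Scaled dot-product attention of one query over all cached positions.
      scores = (q @ k.T) / np.sqrt(d)
      w = np.exp(scores - scores.max())
      w /= w.sum()
      return w @ v

    def decode_step(x_t, cache):
      # Fast decoding: append this timestep's key/value to the cache and
      # attend once, instead of recomputing keys/values for the whole
      # prefix at every step.
      cache["k"] = np.vstack([cache["k"], (x_t @ wk)[None, :]])
      cache["v"] = np.vstack([cache["v"], (x_t @ wv)[None, :]])
      return attend(x_t @ wq, cache["k"], cache["v"])

    cache = {"k": np.zeros((0, d)), "v": np.zeros((0, d))}
    for t in range(5):  # each step costs O(t) attention, not full recompute
      out = decode_step(rng.randn(d), cache)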