This repository has been archived by the owner on Jul 7, 2023. It is now read-only.

v1.9.0

@afrozenator afrozenator released this 08 Sep 01:30

PRs accepted:

  • Cleaned up the code for GRU/LSTM as transition functions in the Universal Transformer. Thanks @MostafaDehghani !
  • Clipwrapper by @piotrmilos !
  • Corrected a Transformer spelling mistake - Thanks @jurasofish!
  • Fixed Universal Transformer weight updates - Thanks @cbockman and @cyvius96 !
  • Common Voice problem fixes and refactoring - Thanks @tlatkowski !
  • Infer observation datatype and shape from the environment - Thanks @koz4k !

New Problems / Models:

  • Added a simple discrete autoencoder video model. Thanks @lukaszkaiser !
  • DistributedText2TextProblem, a Text2TextProblem base class for large datasets. Thanks @afrozenator!
  • Added the Stanford Natural Language Inference problem, StanfordNLI, in stanford_nli.py. Thanks @urvashik !
  • Added Text2TextRemotedir for problems with a persistent remote directory. Thanks @rsepassi !
  • Added a separate binary for vocabulary-file generation for subclasses of Text2TextProblem. Thanks @afrozenator!
  • Added support for non-deterministic ATARI modes and sticky actions. Thanks @mbz !
  • Added a pretraining schedule and loss reweighting to MultiProblem. Thanks @urvashik !
  • Added SummarizeWikiPretrainSeqToSeq32k and Text2textElmo.
  • Added AutoencoderResidualVAE. Thanks @lukaszkaiser !
  • Discriminator changes by @lukaszkaiser and @aidangomez
  • Allow scheduled sampling in basic video model, simplify default video modality. Thanks @lukaszkaiser !
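Two of the additions above, sticky actions for ATARI and scheduled sampling for the video model, are simple enough to sketch in isolation. This is a minimal illustration of the underlying ideas, not tensor2tensor's implementation; the function names are hypothetical:

```python
import random


def apply_sticky_action(chosen_action, prev_action, stickiness=0.25, rng=random):
    """Sticky actions: with probability `stickiness`, the environment repeats
    the previous action instead of executing the agent's chosen one, making
    ATARI transitions non-deterministic."""
    return prev_action if rng.random() < stickiness else chosen_action


def mix_scheduled_sampling(ground_truth, model_samples, sampling_prob, rng=random):
    """Scheduled sampling: build decoder inputs by replacing each ground-truth
    token/frame with the model's own sample with probability `sampling_prob`,
    so training inputs gradually resemble inference-time inputs."""
    return [s if rng.random() < sampling_prob else g
            for g, s in zip(ground_truth, model_samples)]
```

With `stickiness=0.0` or `sampling_prob=0.0` both helpers reduce to the standard deterministic / teacher-forced behavior; raising the probability interpolates toward fully self-conditioned behavior.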

Code Cleanups:

  • Used standard vocab naming and fixed translate data generation. Thanks @rsepassi !
  • Replaced manual ops with dot_product_attention in masked_local_attention_1d. Thanks @dustinvtran !
  • Eager tests! Thanks @dustinvtran !
  • Separate out a video/ directory in models/. Thanks @lukaszkaiser !
  • Speed up RL test - thanks @lukaszkaiser !

Bug Fixes:

  • Don't daisy-chain variables in Universal Transformer. Thanks @lukaszkaiser !
  • Corrections to mixing, dropout and sampling in autoencoders. Thanks @lukaszkaiser !
  • WSJ parsing now uses only 1,000 examples to build its vocab.
  • Fixed scoring crash on empty targets. Thanks David Grangier!
  • Bug fix in transformer_vae.py

Enhancements to MTF, Video Models and much more!