This is a precise to-read list for recurrent neural networks (RNNs).
Long author lists are omitted; each entry starts with the year, followed by the title, journal, and link. Newer papers go first. The list also includes additional resources such as code, interesting blog posts, cool articles, etc.
Maintainer: [Johnny Ho](https://github.com/johnny5550822)
This repository was originally created to keep track of resources related to recurrent neural networks. I hope everyone can contribute to it and make it better, so please submit pull requests! For any questions, contact me ([email protected]).
This repository aims for preciseness and neatness. There is another repository that also provides excellent resources (with full author lists, etc.) for recurrent neural networks; please visit [awesome-rnn](https://github.com/kjw0612/awesome-rnn).
- [Software Package](#software-package)
- [Sample Codes](#sample-codes)
- [Blogs](#blogs)
- [Review](#review)
- [Tutorial](#tutorial)
- [Language modeling](#language-modeling)
- [Translation](#translation)
- [Image Generation](#image-generation)
- [Hand-writing](#hand-writing)
- [Text Generation](#text-generation)
- [Questions and Answers](#questions-and-answers)
- [Cell Type](#cell-type)
- [Other](#other)
## Software Package
- python, [neon]
- python, [chainer]
- torch, [oxnn]
- torch, [Element-research]
- Deep Learning in general
  - torch, [dp], a torch deep learning library. I think the examples folder is the most useful, e.g., the CNN implementation there
## Sample Codes
- torch, [char-rnn]
- torch, [learning_to_execute]
- torch, [Oxford practical 6]
- torch, [Spatial Transformer Layer]
- Deep Learning in general
  - torch, [torch7 official tutorials]
  - torch, [torch7 official demos], has a lot of good examples
  - torch, [UCLA IPAM course on torch7]
  - torch, [A simplified example on CNN]
  - torch, [Kaggle CIFAR-10], code for the Kaggle CIFAR-10 competition (CNN)
- Lua in general
  - Lua, [Learn Lua in 15 Minutes]
## Blogs
- GitXiv, a summary of recently published Git repositories for research algorithms [link]
- torch7 blogs (some cool explanations and code) [blog]
- Up-to-date DL news from Notey [blog]
- What does DL think about your selfie? [blog]
- The Unreasonable Effectiveness of Recurrent Neural Networks. [blog]
## Review
- 2015 Deep Learning, Nature [paper]
## Tutorial
- 2015 Recurrent Neural Networks Tutorial [link]
- 2015 Understanding LSTM [link]
- 2003 A tutorial on training recurrent neural networks, covering BPPT, RTRL, EKF and the "echo state network" approach [link]
## Language modeling
- 2016 Long Short-Term Memory-Networks for Machine Reading, arXiv [paper]
- 2015 Teaching Machines to Read and Comprehend, NIPS [paper]
- 2015 Character-Aware Neural Language Models, arXiv [paper]
## Translation
- 2014 Sequence to Sequence Learning with Neural Networks, NIPS [paper]
- 2014 On the Properties of Neural Machine Translation: Encoder–Decoder Approaches, arXiv [paper]
- 2014 Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation, arXiv [paper]
- 2013 Recurrent Continuous Translation Models, EMNLP [paper]
## Image Generation
- 2015 DRAW: A Recurrent Neural Network For Image Generation, arXiv [paper](http://arxiv.org/abs/1502.04623)
- 2015 Unveiling the Dreams of Word Embeddings: Towards Language-Driven Image Generation, arXiv [paper]
- 2015 Generative Image Modeling Using Spatial LSTMs, arXiv [paper]
- 2014 Recurrent Models of Visual Attention, arXiv [paper]
## Hand-writing
- 2013 Generating Sequences With Recurrent Neural Networks, arXiv [paper]
- 2007 Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks, NIPS [paper]
## Text Generation
- 2011 Generating Text with Recurrent Neural Networks, ICML [paper]
## Questions and Answers
- 2015 Ask Your Neurons: A Neural-based Approach to Answering Questions about Images [paper]
- 2015 VQA: Visual Question Answering [paper]
- 2015 Exploring Models and Data for Image Question Answering [paper]
- 2015 Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering [paper]
- 2015 Teaching Machines to Read and Comprehend, NIPS [paper]
- 2015 Ask Me Anything: Dynamic Memory Networks for Natural Language Processing [paper]
## Cell Type
- 2014 Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, arXiv [paper]
- 1997 Long Short-Term Memory, Neural Computation [paper] (see the minimal cell sketch below)
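To make the gated cells cited above a bit more concrete, here is a minimal NumPy sketch of a single LSTM forward step with the standard input/forget/output gates. The weight layout, gate ordering, and toy dimensions are illustrative assumptions, not taken from either paper or from any particular package.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W: (4*H, D+H), b: (4*H,).
    Gate order (input, forget, output, candidate) is an illustrative choice."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b  # all four pre-activations at once
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2 * H])    # forget gate
    o = sigmoid(z[2 * H:3 * H])  # output gate
    g = np.tanh(z[3 * H:4 * H])  # candidate cell update
    c = f * c_prev + i * g     # new cell state
    h = o * np.tanh(c)         # new hidden state
    return h, c

# toy usage: D = 3 inputs, H = 5 hidden units
D, H = 3, 5
rng = np.random.default_rng(0)
W, b = rng.standard_normal((4 * H, D + H)) * 0.1, np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(4):  # unroll over a short toy sequence
    h, c = lstm_step(rng.standard_normal(D), h, c, W, b)
print(h.shape, c.shape)
```

A GRU step (the cell evaluated in the 2014 paper above) is similar in spirit but merges the input and forget gates into a single update gate and has no separate cell state.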
## Other
- 2016 Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations, arXiv [paper]