There are also plenty of additions that can speed up or otherwise help training which I haven't gone into here, such as dropout or batch normalization. Also, when dealing with variable-length sequences, you may want to consider using a special token to denote the last character or element in a sequence.
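As a minimal sketch of both ideas in plain Python/NumPy (the keep probability, the toy sequences, and the end-of-sequence index below are arbitrary choices for illustration, not values from this session):

```python
import numpy as np

def dropout(x, keep_prob=0.8):
    """Inverted dropout: randomly zero activations during training,
    scaling the survivors so the expected value stays the same."""
    mask = (np.random.rand(*x.shape) < keep_prob).astype(x.dtype)
    return x * mask / keep_prob

# Variable-length sequences: reserve one extra vocabulary index as an
# end-of-sequence (EOS) token and pad every sequence out to a fixed length.
sequences = ["hi", "hello", "hey there"]   # toy data for illustration
vocab = sorted(set("".join(sequences)))
char_to_idx = {c: i for i, c in enumerate(vocab)}
EOS = len(vocab)                           # the special token's index

encoded = [[char_to_idx[c] for c in s] + [EOS] for s in sequences]
max_len = max(len(s) for s in encoded)
batch = np.array([s + [EOS] * (max_len - len(s)) for s in encoded])
print(batch.shape)                         # (3, max_len), ready to batch into a model
```

At sampling time, the same token gives the model a way to say "I'm done": you can stop generating as soon as it emits the EOS index.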
As for applications, they are practically endless. And I think that is really what makes this field so exciting right now: there doesn't seem to be any limit to what is possible. You are not limited to text, first of all. You might feed in MIDI data to create a piece of algorithmic music. I've tried it with raw sound data, and even this works, though it requires a lot of memory and at least 30k iterations of training before it sounds like anything. Or you might try some other unexpected text-based representation, such as image data encoded as base64 JPEG, or other compressed data formats. Or perhaps you are more adventurous and want to combine what you've learned here with the previous sessions to add recurrent layers to a traditional convolutional model.
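To make the base64 idea concrete, here is a hedged sketch; the random bytes stand in for the contents of an actual JPEG file, which you would read from disk instead:

```python
import base64
import numpy as np

# Stand-in for a real file's bytes; in practice you might do:
#   raw = open('some_image.jpg', 'rb').read()
raw = np.random.bytes(1024)

# Base64 turns arbitrary binary data into text drawn from at most
# 65 symbols (A-Z, a-z, 0-9, +, /, and = for padding), so the result
# can be fed to a character-level model like any other corpus.
text = base64.b64encode(raw).decode('ascii')
vocab = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(vocab)}
ids = np.array([char_to_idx[c] for c in text])
print(len(vocab), ids[:10])
```

The nice thing about this framing is that anything you can serialize to a stream of symbols, MIDI events and compressed formats included, becomes fair game for the same character-level pipeline; whether the samples decode back into anything valid is part of the fun.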
If you're still here, then I'm really excited for you and excited to see what you'll create. By now, you've seen most of the major building blocks of neural networks. From here, you are only limited by the time it takes to train all of the interesting ideas you'll have. But there is still so much more to discover, and it's very likely that parts of this course are already out of date, because this field moves incredibly fast. In any case, the creative applications of these techniques are still fairly nascent, so if you're here to see how your creative practice could grow with them, you should already have plenty to explore.
I'm very excited about where the field is moving. It is often very hard to label large amounts of data in a meaningful and consistent way, but a lot of interesting work is starting to emerge around unsupervised models: models that simply take data in and let the computer reason about it on its own. Even more interesting is combining these with general-purpose learning algorithms, which is where reinforcement learning is starting to shine. But that's for another course, perhaps.