The blog post The Unreasonable Effectiveness of Recurrent Neural Networks (http://karpathy.github.io/2015/05/21/rnn-effectiveness/) describes a fascinating set of examples of RNN character-level language models. All use the same approach as this example, varying only the training text:
- Shakespeare text generation, as in this example
- Wikipedia article generation
- Algebraic geometry (LaTeX) generation
- Linux source code generation
- Baby name generation
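The core idea behind all of these models is the same: predict the next character from what came before, then sample repeatedly to generate text. As a minimal sketch of that idea (a toy character-level bigram model, not the RNN from the blog post; the corpus string here is a made-up example):

```python
import random
from collections import defaultdict

def train_char_bigram(text):
    """Record, for each character, every character that follows it."""
    model = defaultdict(list)
    for a, b in zip(text, text[1:]):
        model[a].append(b)
    return model

def sample(model, seed, length, rng=None):
    """Generate text one character at a time from the bigram counts."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: no observed successor
            break
        out.append(rng.choice(followers))
    return "".join(out)

# Tiny illustrative corpus; a real model would train on megabytes of text.
corpus = "to be or not to be that is the question"
model = train_char_bigram(corpus)
print(sample(model, "t", 40))
```

An RNN replaces the single-character lookup with a learned hidden state that summarizes the whole preceding context, which is what lets it produce plausible Shakespeare, LaTeX, or C code rather than character soup.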