A brief overview of RNNs
RNN models learn a representation for each token by rolling up information from the tokens at earlier timesteps, and they learn a sentence representation at the last timestep. This mechanism has been found beneficial in many ways, outlined as follows:
• Firstly, RNNs can be arranged as one-to-many models for tasks such as language 
generation or music generation.
• Secondly, many-to-one models can be used for text classification or sentiment 
analysis.
• And lastly, many-to-many models are used for NER problems. Many-to-many 
models are also used to solve encoder-decoder problems such as machine 
translation, question answering, and text summarization.
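The recurrence described above can be sketched in plain NumPy; the dimensions, random weights, and variable names here are illustrative assumptions, not a production implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 5 tokens, 8-dim embeddings, 16-dim hidden state
seq_len, emb_dim, hidden_dim = 5, 8, 16
tokens = rng.normal(size=(seq_len, emb_dim))  # stand-in token embeddings

# Randomly initialized RNN weights (assumed for illustration)
W_xh = rng.normal(scale=0.1, size=(emb_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)  # initial hidden state
token_reprs = []          # one hidden state per token (many-to-many output)
for x_t in tokens:
    # Each step rolls up information from earlier timesteps via h
    h = np.tanh(x_t @ W_xh + h @ W_hh + b_h)
    token_reprs.append(h)

# The last timestep's hidden state serves as the sentence
# representation (the many-to-one setting)
sentence_repr = token_reprs[-1]
print(len(token_reprs), sentence_repr.shape)
```

Keeping all the per-token hidden states corresponds to the many-to-many setting (e.g., NER), while using only `sentence_repr` corresponds to the many-to-one setting (e.g., sentiment analysis).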