the current input and $w$ is the weight multiplied with the previous output $h_{t-1}$ [19]. Figure 2 shows the structure of training the models with the GRU-based RNN.

$h_t = f(W x_t + w \, h_{t-1})$    (2)

Figure 2: Architecture of training the models with GRU based RNN

However, an RNN also has difficulty remembering the effect of the earlier layers in a long sequence: even if the parameter values of the early layers change dramatically, their effect on the output remains shallow. This issue is known as the vanishing gradient problem. Two gating mechanisms are generally used to solve it: LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit). We have used GRU in this study because it uses only two gates, an update gate and a reset gate, whereas LSTM uses three gates: input, forget, and output [16]. Moreover, LSTM maintains an internal memory to remember the effect of the earlier layers, whereas GRU needs no extra memory, which makes it easier to implement and faster to train on the dataset. Hence, GRU is more efficient than LSTM for medium-length sequence data. In GRU, the update gate and the reset gate are vectors that help decide which information will pass through, and they are trained to remember data from long ago without discarding data that is irrelevant for the prediction [17]. Equations (3) and (4) give the update gate and the reset gate, which determine how much past data needs to be remembered and how much needs to be forgotten [17]. In equation (3), $z_t$ is the update gate calculated for time step $t$, where $W_z$ is its weight for the current input $x_t$ and $h_{t-1}$ is the information of the previous $t-1$ unit, which is multiplied by $U_z$. Similarly, $r_t$ in equation (4) is the reset gate calculated for time step $t$, where $W_r$ is its weight for the current input $x_t$ and $h_{t-1}$ is the information of the previous $t-1$ unit, which is multiplied by $U_r$.

$z_t = \sigma(W_z x_t + U_z h_{t-1})$    (3)

$r_t = \sigma(W_r x_t + U_r h_{t-1})$    (4)

As shown in Figure 2, we used the GRU-based RNN to train our five previously created datasets (uni-gram to 5-gram) and built five corresponding models. Figure 3 shows the structure of the trained models, which have five hidden layers named embedding_1 (Embedding), gru_1 (GRU), gru_2 (GRU), dense_1 (Dense), and dense_2 (Dense), with 1,681,721 parameters in total.

Figure 3: Structure of layers in our training process
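The layer configuration described above can be reproduced approximately in Keras. The following is a minimal sketch only: the vocabulary size, embedding dimension, and GRU unit counts are assumptions, since the text reports the layer names and the total parameter count (1,681,721) but not the individual hyperparameters.

```python
# Minimal sketch of a five-layer model (Embedding -> GRU -> GRU -> Dense -> Dense).
# VOCAB_SIZE, EMBED_DIM, and GRU_UNITS are illustrative assumptions; the exact values
# used to reach 1,681,721 total parameters are not reported in the text.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GRU, Dense

VOCAB_SIZE = 10000   # number of distinct Bangla words in the corpus (assumed)
EMBED_DIM = 100      # size of each word embedding vector (assumed)
GRU_UNITS = 150      # hidden units per GRU layer (assumed)

model = Sequential([
    Embedding(VOCAB_SIZE, EMBED_DIM),            # embedding_1
    GRU(GRU_UNITS, return_sequences=True),       # gru_1
    GRU(GRU_UNITS),                              # gru_2
    Dense(GRU_UNITS, activation='relu'),         # dense_1
    Dense(VOCAB_SIZE, activation='softmax'),     # dense_2: distribution over next words
])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()  # prints the layer structure analogous to Figure 3
```

One such model would be trained per n-gram dataset, each taking a fixed-length input of one to five words.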
1) Word Prediction

After training all five datasets (uni-gram to 5-gram), we have five trained models for inputs of different lengths. Each model takes a word sequence of a particular length as input and determines a single output: the next most likely word that should follow the input sequence. If the input word sequence has length one, it is sent to the trained uni-gram model, since that model takes a single word as input and predicts the most likely next word. Likewise, if the number of input words is two, the input is sent to the trained bi-gram model, which takes two input words and predicts an output word, and so on for the remaining trained models. Figure 4 represents the word prediction process for inputs of different lengths using the five trained models. There is an exception when the length of the input word sequence is greater than five: in that case, only the last five words are used in the trained 5-gram model to predict the next word, because the last four or five words are generally enough to establish the dependency of the sequence.

Figure 4: Word prediction from the trained models

2) Sentence Prediction

In our work, we not only predict the next most likely word but also suggest a full sentence from the given word sequence. To do this, we use the previously described architecture of N-gram models trained with the GRU-based RNN. Given the input sequence, we predict the next word and then append the output (predicted word) to the input, so that further words can be predicted from the newly updated input, eventually forming a complete sentence. This process continues until the end of a sentence is detected. In the Bangla language, the end of a sentence is marked by punctuation: "।" for a normal statement and "?" for a question. The model therefore keeps predicting the word sequence until the sentence-ending punctuation is found, and the total output is the suggested possible sentence.
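A minimal sketch of this word-and-sentence prediction loop is given below. It assumes the five trained Keras models are stored in a dictionary keyed by input length and that a fitted Keras Tokenizer over the Bangla corpus is available; the function names and the greedy argmax decoding are illustrative assumptions rather than the exact implementation used in the experiments.

```python
import numpy as np

# models: dict mapping input length (1..5) to the trained Keras model for that n-gram size (assumed).
# tokenizer: fitted keras.preprocessing.text.Tokenizer over the Bangla corpus (assumed).
SENTENCE_END = {"।", "?"}  # Bangla full stop (dari) and question mark

def predict_next_word(models, tokenizer, words):
    """Pick the model matching the input length and greedily predict the next word."""
    words = words[-5:]  # if more than five words are given, keep only the last five (5-gram model)
    model = models[len(words)]
    seq = tokenizer.texts_to_sequences([" ".join(words)])[0]
    probs = model.predict(np.array([seq]), verbose=0)[0]
    next_index = int(np.argmax(probs))
    return tokenizer.index_word.get(next_index, "")

def predict_sentence(models, tokenizer, words, max_len=30):
    """Append predicted words until sentence-ending punctuation (or max_len) is reached."""
    words = list(words)
    while len(words) < max_len:
        next_word = predict_next_word(models, tokenizer, words)
        words.append(next_word)
        if next_word in SENTENCE_END:
            break
    return " ".join(words)
```

Greedy argmax decoding is used here for simplicity; the text does not specify whether any alternative decoding strategy was applied.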
IV. RESULT ANALYSIS

To validate the proposed approach, it is essential to run the experiments and analyze the outcomes carefully. Hence, we have evaluated our proposed approach on the corpus dataset by training the five different models, which have identical structures (Figure 3), for 1000 epochs. As Figure 5 and Figure 6 show, the trained uni-gram model has an average accuracy of 32.17% and an average loss of 276.44% for our proposed approach, whereas the bi-gram model has an average accuracy of 78.15% and an average loss of 53.36%. The tri-gram model achieves 95.84% accuracy and 8.52% loss on average for the same dataset used for the uni-gram and bi-gram models. Similarly, the 4-gram and 5-gram models show average accuracies of 99.24% and 99.70% with average losses of 2.04% and 1.11%, which indicates that accuracy improves and loss decreases as n increases.

Figure 5: Graphical Representation of Average Accuracy of Trained Models in Percentage against 1000 Epochs

Figure 6: Graphical Representation of Average Loss of Trained Models in Percentage against 1000 Epochs

We have also compared the experimental results of our proposed approach with the approaches proposed in papers [1] and [2] and found that paper [1] reports an accuracy of 88.20% and paper [2] reports 63.5% on average for their proposed methods, whereas our approach reaches 95.84%–99.70% on average for higher-order sequences. Figure 7 shows the comparison among the different approaches used in paper [1], paper [2], and this study.

Figure 7: Comparison Chart of Average Accuracy

V. CONCLUSION

To predict the next most appropriate and suitable Bangla word (one or more) and sentence, the GRU-based RNN has made a significant contribution to this research work. To justify the use of a GRU-based RNN, we compared our proposed method with other methods that have been used for Bangla and other languages and obtained better accuracy than them (Figure 7). Although the uni-gram model gives poor accuracy in our proposed work (32.17%), the accuracy for higher-order sequences such as tri-gram, 4-gram, and 5-gram is high (95.84%, 99.24%, and 99.70%, respectively). The overall accuracy of this approach could be further improved with a larger dataset than the one used in this work. Using a Bangla corpus dataset was challenging, as there is no ready-made dataset for the Bangla language and we had to collect the data from different sources. In future work, we will try to collect a larger dataset to obtain better performance from the GRU-based RNN for Bangla next word and sentence prediction. Furthermore, this study can serve as a tool for sustainable technologies in industry, as its applications are vast and it can be used in different sectors.

VI. REFERENCES

[1] P. P. Barman and A. Boruah, "A RNN based Approach for next word prediction in Assamese Phonetic Transcription," Procedia Comput. Sci., vol. 143, pp. 117–123, 2018.
[2] M. T. Habib, A. Al-Mamun, M. S. Rahman, S. M. T. Siddiquee, and F. Ahmed, "An Exploratory Approach to Find a Novel Metric Based Optimum Language Model for Automatic Bangla Word Prediction," Int. J. Intell. Syst. Appl., vol. 10, no. 2, pp. 47–54, Feb. 2018.
[3] "What is Word Prediction?" [Online]. Available: http://www2.edc.org/ncip/library/wp/What_is.htm. [Accessed: 03-Aug-2019].
[4] S. Bickel, P. Haider, and T. Scheffer, "Predicting sentences using N-gram language models," in Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT '05), 2005, pp. 193–200.
[5] M. M. Haque, M. T. Habib, and M. M. Rahman, "Automated Word Prediction in Bangla Language Using Stochastic Language Models," Int. J. Found. Comput. Sci. Technol., vol. 5, no. 6, 2015.
[6] R. Makkar, M. Kaur, and D. V. Sharma, "Word Prediction Systems: A Survey," Adv. Comput. Sci. Inf. Technol., vol. 2, no. 2, pp. 177–180.
[7] "Prothom Alo | Latest online Bangla world news bd | Sports photo video live." [Online]. Available: https://www.prothomalo.com/. [Accessed: 25-Aug-2019].
[8] "BBC News বাংলা." [Online]. Available: https://www.bbc.com/bengali. [Accessed: 25-Aug-2019].
[9] "Bangla Academy Sangkhipto Bangla Avidhan (Bengali to Bengali Dictionary)." [Online]. Available: https://www.gobanglabooks.com/2017/08/bangla-academy-sangkhipo-bangla-avidhan.html. [Accessed: 25-Aug-2019].
[10] J. Dumbali and A. Nagaraja Rao, "Real-time word prediction using N-grams model," International Journal
of Innovative Technology and Exploring Engineering, vol. 8, pp. 870–873, 2019.
[11] H. Al-Mubaid, "A Learning-Classification Based Approach for Word Prediction," Int. Arab J. Inf. Technol., vol. 4, pp. 264–271, 2007.
[12] S. Palazuelos-Cagigas, J. Martín, J. Macias-Guarasa, J. C. García-García, D. Cavalieri, T. Bastos, and M. Sarcinelli-Filho, "Machine learning methods for word prediction in Brazilian Portuguese," Assistive Technology Research Series, vol. 29, pp. 424–431, 2011, doi: 10.3233/978-1-60750-814-4-424.
[13] N. Ajithesh, "Artificial intelligence in word prediction systems," Int. J. Adv. Res. Ideas Innov. Technol., vol. 4, no. 5, pp. 42–45, 2018.
[14] "n-gram," Wikipedia. [Online]. Available: https://en.wikipedia.org/wiki/N-gram. See also A. Z. Broder, S. C. Glassman, M. S. Manasse, and G. Zweig, "Syntactic clustering of the web," Computer Networks and ISDN Systems, vol. 29, no. 8, pp. 1157–1166, 1997.
[15] "Introduction to Language Models: n-gram." [Online]. Available: https://towardsdatascience.com/introduction-to-language-models-n-gram-e323081503d9. See also the chapter draft at https://lagunita.stanford.edu/c4x/Engineering/CS-224N/asset/slp4.pdf.
[16] "neural network - When to use GRU over LSTM? - Data Science Stack Exchange." [Online]. Available: https://datascience.stackexchange.com/questions/14581/when-to-use-gru-over-lstm. [Accessed: 31-Jul-2019].
[17] "Understanding GRU Networks - Towards Data Science." [Online]. Available: https://towardsdatascience.com/understanding-gru-networks-2ef37df6c9be. [Accessed: 31-Jul-2019].
[18] "RPubs - Next Word Prediction using Katz Backoff Model - Part 2: N-gram model, Katz Backoff, and Good-Turing Discounting." [Online]. Available: https://rpubs.com/leomak/TextPrediction_KBO_Katz_Good-Turing. [Accessed: 02-Aug-2019].
[19] "How Predictive Analysis Neural Networks Work." [Online]. Available: https://www.dummies.com/programming/big-data/data-science/how-predictive-analysis-neural-networks-work/. [Accessed: 02-Aug-2019].
[20] S. Hochreiter and J. Schmidhuber, "Long Short-Term Memory," Neural Comput., vol. 9, no. 8, pp. 1735–1780, Nov. 1997.
[21] E. A. Emon, S. Rahman, J. Banarjee, A. K. Das, and T. Mittra, "A Deep Learning Approach to Detect Abusive Bengali Text," in 2019 7th International Conference on Smart Computing & Communications (ICSCC), Sarawak, Malaysia, 2019.
[22] M. M. Hossain, M. F. Labib, A. S. Rifat, A. K. Das, and M. Mukta, "Auto-correction of English to Bengali Transliteration System using Levenshtein Distance," in 2019 7th International Conference on Smart Computing & Communications (ICSCC), Sarawak, Malaysia, 2019.
[23] M. D. Drovo, M. Chowdhury, S. I. Uday, and A. K. Das, "Named Entity Recognition in Bengali Text Using Merged Hidden Markov Model and Rule Base Approach," in 2019 7th International Conference on Smart Computing & Communications (ICSCC), Sarawak, Malaysia, 2019.
[24] E. Biswas and A. K. Das, "Symptom-Based Disease Detection System In Bengali Using Convolution Neural Network," in 2019 7th International Conference on Smart Computing & Communications (ICSCC), Sarawak, Malaysia, 2019.
[25] R. A. Tuhin, B. K. Paul, F. Nawrine, M. Akter, and A. K. Das, "An Automated System of Sentiment Analysis from Bangla Text using Supervised Learning Techniques," in 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS), Singapore, 2019, pp. 360–364.
[26] J. Islam, M. Mubassira, M. R. Islam, and A. K. Das, "A Speech Recognition System for Bengali Language using Recurrent Neural Network," in 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS), Singapore, 2019, pp. 73–76.