experience first, then grammatical rules is analogous to our 'NMT (or SMT) followed by rule-based' case. Besides, some of our survey participants also prefer to translate by applying rules first, then experience, which is analogous to our 'rule-based followed by NMT' approach. We show a mapping between the different types of human translation approaches (considered in our survey) and our proposed translation approaches in Table 5.1. Next, we present the results obtained from our survey in Figure 5.2. We find that this survey result supports our experimental results, since our results also imply that the 'NMT (or SMT) followed by rule-based' approach is the best translation approach in machine translation.

Human translation approach          | Our proposed translation approach
Only experience                     | NMT (or SMT)
Only rules                          | Rule-based
Experience and rules in any order   | Not applicable
Rules first, then experience        | Rule-based followed by NMT (or SMT)
Experience first, then rules        | NMT (or SMT) followed by rule-based
Either experience or rules          | Either NMT (or SMT) or rule-based
Others (specify)                    | Not applicable

Table 5.1: Mapping between human translation approaches and our proposed translation approaches

Figure 5.2: Results of survey participants' responses

Chapter 6
Avenues for Further Improvements

People from various backgrounds strive to learn multiple languages effectively to sustain themselves in the era of technology and communication. Translators offer great help in accomplishing such a laborious task. However, even the basic sentence-building rules of any language usually come with a good number of exceptions. Keeping track of the wide variety of such possible cases is one of the most challenging tasks for a translation system, and even the most intelligent living beings are no exception to it. For example, let us consider these two sentences:

1. "The complex houses married and single soldiers and their families." This is what is called a garden path sentence. Though grammatically correct, the reader's initial interpretation of the sentence may be nonsensical. Here, 'complex' may be interpreted as an adjective and 'houses' may be interpreted as a noun. Readers are immediately confused upon reading that 'the complex houses married', interpreting 'married' as the verb. How can houses get married? In actuality, 'complex' is the noun, 'houses' is the verb, and 'married' is the adjective. The sentence is trying to express the following: "Single soldiers, as well as married soldiers and their families, reside in the complex."

2. "All the faith he had had had had no effect on the outcome of his life." This sentence is an example of lexical ambiguity. As strange as this sentence might sound, it is actually grammatically correct. The sentence relies on a double use of the past perfect. The two instances of 'had had' play different grammatical roles in the sentence: the first one is a modifier while the second one is the main verb of the sentence.

Because of the presence of such ambiguities in all languages, even the most sophisticated software cannot substitute the skill of a professional translator. Besides, the reasons why machine translations are not as satisfactory as human translations are many.
One of the reasons that machine translators cannot replace professional human translators is the same reason that plain old bilingual laypeople cannot replace professional human translators for many tasks. Most translation jobs require more than just knowledge of two languages. Translators cannot be walking dictionaries. They need to recreate language by crafting beautiful phrases and sentences that have the same impact as the source. Often, they may need to devise brand new ways of saying things in translation, and to do so, they must draw upon a lifetime's worth of knowledge derived from living in two cultures. Machine translators cannot do exactly that. Considering all these apparently unavoidable limitations of a machine translator, we must also accept that machine translation is now vital to top industries around the world and is one of the most promising fields in the research sector. However, this topic remains little explored for low-resource languages such as Bengali, which opens up scope for a large variety of possible future work in this research area.

6.1 Future Work

Things are changing fast in the world of translation technology. As each year passes, improvements in computational capacity, AI, and data analysis expand both the speed and accuracy of machine translation. Although previous forms of machine translation were completely rule-based (RBMT) or phrase-based (PBMT), NMT makes the translation process look less like a computer and more like a human. However, the road to replacing human translators with NMT may be a long one. We discuss some of our possible future work as follows:

• Although NMT is a huge success in the translation industry, it does not perform equally well for all languages. Different languages can vary from each other to a great extent in terms of word embeddings, inferences, etc. We plan to create an efficient word embedding module for Bengali soon.

• Although we achieve improvement over classical NMT in terms of performance scores, it comes at the cost of higher resource (time and memory) overheads than NMT. Therefore, we plan to optimize the resource overheads required for our proposed approaches.

• We plan to explore other possible modes of blending, such as phrase-based blending, trained blending, morphological blending, etc., in the future.

• Efficient AI techniques and indexing and searching mechanisms will improve the overall system, which may result in more accurate output. We plan to devise a more efficient algorithm for token tagging and for searching words in the vocabulary.

• One of the main challenges in Bengali to English text conversion remains in implementing its vast set of grammatical rules. If we can track more core rules to overshadow the ambiguous grammars, then the translation task will become simpler and more compact. Therefore, we plan to standardize and optimize the set of implemented rules for Bengali in our rule-based translator.

• Building a parallel corpus of Bengali-English sentence pairs is one of the most demanding tasks for translation of Bengali sentences using NMT. While other high-resource languages have parallel corpora containing millions of sentence pairs, there is no such corpus for Bengali containing even thousands of sentence pairs, which drastically degrades the translation performance of Bengali sentences using NMT.
• There are lots of research opportunities in the language processing sector. Since languages keep evolving continuously, we need to find a way to incorporate new grammatical rules. Machine learning using statistical MT can be one way. We plan to investigate integration of a statistical language model with our rule-based model for future improvement.

• The role of prepositions in a sentence can be ambiguous. Therefore, another idea for our future work is to extend the preposition handling component. Besides, adding more postpositional words and inflectional suffixes would improve the system's translation performance.

• Developing OpenNLP tools for efficient parts-of-speech tagging of Bengali words in a sentence is one of the most crucial and least explored tasks in Bengali to English translation. Currently, there is an efficient OpenNLP tool for parts-of-speech tagging of words in English sentences. We aim to extend our work towards developing OpenNLP tools for the Bengali language, which would create a landmark in Bengali language processing.

• Finding applications of WordNet in different areas of NLP. We plan to develop a WordNet for Bengali in the future.

Chapter 7
Conclusion

Millions of immigrants strive for working knowledge of popular non-native languages such as English, as this creates many opportunities in international communities. Translators can offer great help in accomplishing such a laborious task. On the other hand, in the case of machine translation, NMT has emerged as the most promising approach in recent years. NMT mostly outperforms all previous translation technologies. Google Translate, one of the most popular and widely available translators, also uses the NMT approach for translating from one language to another. However, NMT-based systems perform poorly when translating low-resource languages such as Bengali, Arabic, etc. Therefore, the importance of an efficient translator for such languages is noteworthy.

Bengali, being one of the most popular and widely-spoken languages worldwide, remains little explored in some crucial areas of machine translation research. Existing research studies in this regard mostly focus on English to Bengali translation, as only a handful of studies have been performed on translating from Bengali to English. Besides, although some of the existing studies focus on rule-based translation from Bengali to English, these studies lack semantic processing of Bengali words from various aspects, such as finding stems of different forms of Bengali verbs, processing unknown words, etc. Moreover, to the best of our knowledge, none of the studies existing in the literature focuses on integration between a rule-based translator and data-driven machine translators such as NMT, SMT, etc. Accordingly, we focus on all these yet-to-be-explored aspects in our study.

In our study, we make our contribution from three perspectives. First, we develop and implement a new rule-based translator from scratch, which covers several basic grammatical rules for Bengali to English translation. Our rule-based translator adopts new methodologies for stemming of Bengali verbs and processing unknown words. Second, we separately incorporate two popular data-driven machine translation approaches (NMT and SMT). Finally, we explore different possible approaches for blending these two translation schemes (rule-based translation and data-driven machine translation). We also evaluate the performance of each of the blending approaches in terms of standard translation performance metrics.

As revealed in our study, a number of critical issues always make natural language processing and translation tasks more complex.
For a rule-based translator, there remain a number of exceptions that violate the standard rules of grammar, which are quite tough to tackle by implementing any number of rules [57].
Hence, the efficiency of a rule-based translator in translating languages with complex grammatical structures is very low. On the other side of the coin, translations generated by a data-driven machine translator can sometimes be unreliable, offensively wrong, or utterly unintelligible [10]. Besides, such machine translation systems have a steeper learning curve with respect to the amount of training data, resulting in worse quality in low-resource settings. Thus, the performance of a rule-based translator is constrained by the number of incorporated rules, whereas the performance of a data-driven translator is constrained by the amount of data fed to it for learning or training. In reality, it is very difficult to ensure sufficiency either in terms of the number of rules or in terms of the amount of data. Accordingly, neither of the two types of approaches can suffice all alone.

Considering these realistic aspects, we explore different approaches for blending a rule-based translator and data-driven machine translators (NMT and SMT) to investigate whether and how a synergy between these translators can be attained. Here, we mainly focus on how the different types of translators can work in combination rather than in isolation. Our study leads to some promising outcomes, as two of our blending approaches outperform both NMT and SMT in isolation, going much beyond the rule-based translator. In addition to exploring the blending approaches, we also investigate how our rule-based translator (for translating from Bengali to English) can be made more efficient in isolation.

While conducting our study, we have found that it is extremely difficult (if not impossible) to get a large parallel corpus for Bengali to English translation. Accordingly, we plan to work on building such a corpus in the future. Besides, we will also focus on improving the neural network level architecture used in the NMT, considering specific aspects of translating from Bengali to English. We also plan to limit the resource usage required for our blending purpose. In addition, we plan to explore other possible modes of blending, such as phrase-based blending, trained blending, etc., in the future. Finally, exploring our proposed blending approaches for other language pairs remains yet another future work of this study.
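As a purely illustrative aid for the sequential blending idea discussed above, the following minimal sketch shows how two translator components could be composed in either order. It is not the thesis implementation; translate_rule_based and translate_nmt are hypothetical placeholders, and the actual blending logic in this work may differ.

# Illustrative sketch only: sequential composition of two translators.
# translate_rule_based() and translate_nmt() are hypothetical placeholders,
# not functions from this thesis.

def translate_rule_based(text: str) -> str:
    """Placeholder for the rule-based Bengali-to-English translator."""
    raise NotImplementedError

def translate_nmt(text: str) -> str:
    """Placeholder for the data-driven (NMT or SMT) translator."""
    raise NotImplementedError

def blend_rule_then_nmt(text: str) -> str:
    # One possible reading of "rule-based followed by NMT":
    # the first translator's output is handed to the second.
    return translate_nmt(translate_rule_based(text))

def blend_nmt_then_rule(text: str) -> str:
    # One possible reading of "NMT followed by rule-based":
    # the NMT output is post-processed by the rule-based component.
    return translate_rule_based(translate_nmt(text))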
Bibliography

[1] O. Bojar, C. Federmann, M. Fishel, Y. Graham, B. Haddow, and M. Huck, "Findings of the 2018 Conference on Machine Translation (WMT18)", in Proceedings of the Third Conference on Machine Translation (WMT), vol. 2, pp. 272-303, ACL, 2018.

[2] S. Bal, S. Mohanta, L. Mondal, and R. Parekh, "Bilingual Machine Translation: English to Bengali", in Proceedings of the International Ethical Hacking Conference, pp. 247-259, Springer, 2018.

[3] M. Rahman, M. F. Kabir, and M. N. Huda, "A Corpus Based N-gram Hybrid Approach of Bengali to English Machine Translation", in Proceedings of the International Conference on Computer and Information Technology (ICCIT), pp. 1-6, IEEE, 2018.

[4] M. Roy, "A Semi-supervised Approach to Bengali-English Phrase-Based Statistical Machine Translation", in Proceedings of the Canadian Conference on Artificial Intelligence, pp. 291-294, Springer, 2009.

[5] R. Gangadharaiah, R. D. Brown, and J. G. Carbonell, "Phrasal equivalence classes for generalized corpus based machine translation", in Proceedings of the International Conference on Intelligent Text Processing and Computational Linguistics, pp. 13-28, Springer, 2011.

[6] J. D. Kim, R. D. Brown, and J. G. Carbonell, "Chunk-Based EBMT", in Proceedings of the 14th Annual Conference of the European Association for Machine Translation, pp. 1-8, MT Archive, 2010.

[7] S. Dasgupta, A. Wasif, and S. Azam, "An Optimal Way Towards Machine Translation from English to Bengali", in Proceedings of the 7th International Conference on Computer and Information Technology (ICCIT), pp. 648-653, IEEE, 2004.

[8] M. K. Rahman and N. Tarannum, "A Rule Based Approach for Implementation of Bangla to English Translation", in Proceedings of the International Conference on Advanced Computer Science Applications and Technologies (ACSAT), pp. 13-18, IEEE, 2012.

[9] Y. Wu, M. Schuster, Z. Chen, and Q. V. Le, "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation", in Proceedings of the Computing Research Repository, pp. 1-23, arXiv, 2016.

[10] P. Koehn and R. Knowles, "Six Challenges for Neural Machine Translation", in Proceedings of the Computing Research Repository, pp. 1-12, arXiv, 2017.

[11] M. Artetxe, G. Labaka, E. Agirre, and K. Cho, "Unsupervised Neural Machine Translation", in Proceedings of the International Conference on Learning Representations (ICLR), pp. 1-12, OpenReview, 2018.

[12] G. Lample, A. Conneau, L. Denoyer, and M. Ranzato, "Unsupervised machine translation using monolingual corpora only", in Proceedings of the International Conference on Learning Representations (ICLR), pp. 1-14, OpenReview, 2018.

[13] A. Haque, A. Islam, and A. B. M. A. A. Islam, "An Approach Towards Multilingual Translation By Semantic-Based Verb Identification And Root Word Analysis", in Proceedings of the 5th International Conference on Networking, Systems and Security (NSysS), pp. 1-9, IEEE, 2018.

[14] A. Islam, A. Haque, and A. B. M. A. A. Islam, "Polyglot: An approach towards reliable translation by name identification and memory optimization using semantic analysis", in Proceedings of the 4th International Conference on Networking, Systems and Security (NSysS), pp. 1-8, IEEE, 2017.

[15] M. Islam and A. B. M. A. A. Islam, "Polygot: Going Beyond Database Driven And Syntax-based Translation", in Proceedings of the 7th Annual Symposium on Computing for Development, pp. 28-31, ACM, 2016.

[16] A. Klementiev, A. Irvine, C. Callison-Burch, and D. Yarowsky, "Towards statistical machine translation without parallel corpora", in Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pp. 130-140, ACL, 2012.

[17] E. Ristad and P. Yianilos, "Learning String Edit Distance", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, pp. 522-532, IEEE, 1998.

[18] R. Haldar and D. Mukhopadhyay, "Levenshtein Distance Technique in Dictionary Lookup Methods: An Improved Approach", in Proceedings of the Computing Research Repository, pp. 1-5, arXiv, 2011.

[19] K. Papineni, S. Roukos, T. Ward, and W. Zhu, "BLEU: a Method for Automatic Evaluation of Machine Translation", in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 311-318, ACL, 2002.

[20] S. Banerjee and A. Lavie, "METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgements", in Proceedings of the ACL Workshop, pp. 65-72, ACL, 2005.
[21] M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul, "A Study of Translation Edit Rate with Targeted Human Annotation", in Proceedings of the Association for Machine Translation in the Americas, pp. 223-231, AMTA, 2006.
[22] S. Dandapat, S. Sarkar, and A. Basu, "Automatic part-of-speech tagging for Bengali: an approach for morphologically rich languages in a poor resource scenario", in Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pp. 221-224, ACL, 2007.

[23] S. A. Chowdhury, "Developing a Bangla to English Machine Translation System Using Parts Of Speech Tagging: A Review", Journal of Modern Science and Technology, vol. 1, no. 1, pp. 113-119, JMST, 2013.

[24] M. H. Haque, M. F. Hossain, and A. F. Hossain, "Machine and Web Translator for English to Bangla using Natural Language Processing", Daffodil International University Journal of Science & Technology, vol. 5, no. 1, pp. 53-61, DIUJST, 2010.

[25] H. Khoshnoudi, "Investigating the Quality of the Translations of Quran through Equivalence Theory: A Religious Lexicology of the Word Roshd", International Journal of English Language & Translation Studies, vol. 7, no. 3, pp. 19-24, ELTS Journal, 2019.

[26] S. K. Borhan, M. Hossain, and K. Biswas, "Bangla to English Text Conversion using OpenNLP Tools", Daffodil International University Journal of Science & Technology, vol. 8, no. 1, pp. 37-42, DIUJST, 2013.

[27] G. Foster, C. Goutte, and R. Kuhn, "Discriminative instance weighting for domain adaptation in statistical machine translation", in Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pp. 451-459, ACM, 2010.

[28] D. Saha, S. K. Naskar, and S. Bandyopadhyay, "A Semantics-based English-Bengali EBMT System for translating News Headlines", in Proceedings of MT Summit X, pp. 125-133, Asia-Pacific Association for Machine Translation, 2005.

[29] M. D. Huq, "Semantic values in Translating from English to Bangla", Dhaka University Journal of Linguistics, vol. 1, no. 2, pp. 45-66, DUJL, 2009.

[30] G. Doddington, "Automatic Evaluation of Machine Translation Quality Using N-gram Co-Occurrence Statistics", in Proceedings of the 2nd International Conference on Human Language Technology Research, pp. 138-145, ACM, 2002.

[31] S. K. Naskar and S. Bandyopadhyay, "A Phrasal EBMT System for Translating English to Bengali", in Proceedings of the International Conference on Language, Artificial Intelligence, and Computer Science for Natural Language Processing Applications (LAICS-NLP), pp. 372-379, arXiv, 2005.

[32] M. M. Anwar, M. Z. Anwar, and M. A. Bhuiyan, "Syntax Analysis and Machine Translation of Bangla Sentences", International Journal of Computer Science and Network Security, vol. 9, no. 8, pp. 317-326, IJCSNS, 2009.

[33] D. Saha, S. K. Naskar, and S. Bandyopadhyay, "A Semantics-based English-Bengali EBMT System for translating News Headlines", in Proceedings of the 10th International MT Summit, pp. 125-133, Asia-Pacific Association for Machine Translation (AAMT), 2005.

[34] R. Collobert and J. Weston, "A unified architecture for natural language processing: Deep neural networks with multitask learning", in Proceedings of the 25th International Conference on Machine Learning (ICML '08), pp. 160-167, ACM, 2008.

[35] M. Peters, M. Neumann, M. Iyyer, M. Gardner, and L. Zettlemoyer, "Deep Contextualized Word Representations", in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pp. 2227-2237, ACL, 2018.
[36] K. Kann, K. Cho, and S. Bowman, "Towards Realistic Practices In Low-Resource Natural Language Processing: The Development Set", in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1-8, ACL, 2019.

[37] M. Mumin, M. Seddiqui, M. Iqbal, and M. Islam, "Shu-torjoma: An English<->Bangla Statistical Machine Translation System", Journal of Computer Science, vol. 15, no. 7, pp. 1022-1039, Science Publications, 2019.
[38] G. Haffari, M. Roy, and A. Sarkar, "Active learning for statistical phrase-based machine translation", in Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 415-423, ACM, 2009.

[39] D. Mimno, H. M. Wallach, J. Naradowsky, D. Smith, and A. McCallum, "Polylingual topic models", in Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '09), pp. 880-889, ACL, 2009.

[40] E. Alfonseca, M. Ciaramita, and K. Hall, "Gazpacho and summer rash: lexical relationships from temporal patterns of web search queries", in Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1046-1055, ACL and AFNLP, 2009.

[41] J. Carbonell, S. Klein, D. Miller, M. Steinbaum, T. Grassiany, and J. Frey, "Context-Based Machine Translation", in Proceedings of the 7th Conference of the Association for Machine Translation in the Americas, pp. 19-28, AMTA, 2006.

[42] F. Och and H. Ney, "The alignment template approach to statistical machine translation", Journal of Computational Linguistics, vol. 30, no. 4, pp. 417-449, MIT Press, 2004.

[43] F. Och and H. Ney, "A systematic comparison of various statistical alignment models", Journal of Computational Linguistics, vol. 29, no. 1, pp. 19-51, MIT Press, 2003.

[44] P. Koehn, F. Och, and D. Marcu, "Statistical phrase-based translation", in Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL '03), pp. 48-54, ACL, 2003.

[45] Google Translate, https://translate.google.com, last accessed on July 15, 2019.

[46] World Population Clock: 7.7 Billion People (2019) - Worldometers, www.worldometers.info, last accessed on March 31, 2019.

[47] Ethnologue, https://www.ethnologue.com/guides/how-many-languages, last accessed on June 19, 2019.

[48] Importance of learning english essay, https://friedpapers.com/essay/importance-of-learning-english-essay, last accessed on June 21, 2019.

[49] OpenNLP, www.maxnet.sourceforge.net, last accessed on March 13, 2019.

[50] NMT with Tensorflow, https://github.com/tensorflow/nmt, last accessed on June 15, 2019.

[51] Zipf's Law and Heap's Law, www.ccs.neu.edu, last accessed on June 30, 2019.

[52] Kaggle, https://www.kaggle.com/zusmani/the-holy-quran, last accessed on June 30, 2019.

[53] Prothom Alo, https://www.prothomalo.com, last accessed on June 30, 2019.

[54] Subtitles, https://www.subscene.com, last accessed on July 2, 2019.

[55] SUST website, https://www.sust.edu/d/cse/research, last accessed on July 10, 2019.

[56] History and rule-based system, https://www.inf.ed.ac.uk/teaching/courses/mt/lectures/history.pdf, last accessed on September 9, 2019.

[57] Ambiguous Grammar, https://www.thoughtco.com/syntactic-ambiguity-grammar-1692179, last accessed on September 9, 2019.

[58] English Idioms, https://www.ef.com/wwen/english-resources/english-idioms, last accessed on September 9, 2019.

[59] Moses, http://www.statmt.org/moses, last accessed on October 15, 2019.

[60] 3 reasons why neural machine translation is a breakthrough, https://slator.com/technology/3-reasons-why-neural-machine-translation-is-a-breakthrough, last accessed on September 10, 2019.

[61] GlobalVoices, http://opus.nlpl.eu/GlobalVoices.php, last accessed on September 10, 2019.
A Novel Pair-wise Language Detection Approach using Convolutional Neural Network Specifically Targeting Bangla and English

Abstract—Language detection is an essential pre-processing step in the implementation of many multilingual document-processing solutions, such as Optical Character Recognition (OCR) and machine translation. Language detection research for Bangla is very rare, with only a handful of solutions ever reported in the literature. In this paper, we present a novel, lightweight, small-footprint convolutional neural network which detects Bangla and English directly from scanned mixed-language document images. The proposed model achieves 99.98% recognition accuracy for this specific two-language classification problem.

Keywords—Language Detection, Bangla, English, CNN, Pattern Recognition.

Introduction

Detecting the language of a mixed-language document is a very important pre-processing step in automated document processing tasks. One such processing task is Optical Character Recognition (OCR). Another classic example is machine translation, where detecting the language is an essential prerequisite. Humans are great at detecting languages, but for a machine, the solution is non-trivial.

There is another large difference between the two tasks mentioned above. In machine translation tasks, the texts are, in almost all cases, already in electronic format. An example is the widely used Google Translate page (https://translate.google.com.bd), where the language is detected automatically. In OCR tasks, the source is in image format, and no such electronic transcription is available, making the task harder. The problem here is that the cause-and-effect is circular: we can detect the language better if we can run an OCR first, but to use the correct OCR, we need to know what the language is first. Although this paper primarily focuses on the latter problem, applying language detection specifically within the OCR domain, the approach can also be used within a machine translation domain, albeit with some minor modifications.

Nowadays, OCR systems have been developed for most languages. For OCR, the first step is to perform layout analysis, followed by segmentation (paragraph segmentation, word segmentation, character segmentation, etc.), for which language detection is an essential prerequisite. Within the domain of Bangla language documents, a mixture of Bangla and English is not uncommon at all, as seen in Fig. 1, making the task of Bangla document processing comparatively harder. Bangla is one of the most widely used languages in the world, spoken as the first language by 160 million people. For another 20 million speakers, it is used as a second language [1]. With over 2 billion people speaking and using English in their daily lives [2], mature English OCR solutions are widely available, both for academic research and commercial applications. Sadly, that is not true for Bangla. There are commercial solutions such as Google Lens and Google Tesseract [3] that work on multiple languages, including English and Bangla. Still, the accuracy of these tools is not good enough to be used in serious Bangla OCR applications. To develop a commercially viable Bangla OCR, we must address the elephant in the room and solve the language detection problem first.
In this paper, we propose a novel Convolutional Neural Network (CNN) model to identify Bangla and English from mixed-language scanned documents.

Fig. 1: Bangla and English mixed in a primarily Bangla document

Review

There is not much reported work on language detection in scanned document images. We found a couple of works, including a paper from N. Jayanthi et al. [4]. They proposed a convolutional neural network solution for language detection on Hindi, Tamil, and English handwritten characters. Their training dataset consisted of 39,000 handwritten images of characters from the three languages, written with varying font-weight, angles, and coverage. Eighty percent of the characters were used for training, and the other 20% was used for testing the model's character classification accuracy. They achieved a training accuracy of 78% in 40 learning epochs and a testing accuracy of 74%.

In OakNorth [5], the authors use a CNN model to detect language and focus on three languages: German, English, and Italian. They collected book samples and cropped them randomly into 150 x 150 pixels, resulting in a corpus of 2,090 images for English, 1,910 images for German, and 1,680 images for Italian; the total number of samples was 5,680. They split the corpus into two parts: 80% for training and 20% for testing. They report running their model for 100 epochs with early stopping and achieve a 97% accuracy on the test set within 34 epochs.

Another widely used solution is the language detection library within Google Tesseract. Tesseract is used on page-level documents; after converting a page into text, a library predicts the languages and returns the most used language in the document.

Although we did not find much work on scanned documents, we did find additional research on detecting languages from speech and audio. Revay et al. [6] used audio spectrogram images to train a CNN. Their audio clips were 3.75 seconds long, and they used six different languages: English, Spanish, French, German, Russian, and Italian. They used 5,000 clips per language for the training set and 2,000 clips for the validation set; in total, 60,000 samples were used in this reported research. They report an accuracy of 97% on binary language classification. For multiclass classification with six languages, they report an accuracy of 89%.

C. Bartz et al. [7] captured spatial information from audio snippets using a CNN, passed through a sequence of time steps consumed by a Recurrent Neural Network (RNN). They use two datasets: one is the European Speech dataset [8], and the other is a YouTube News dataset built from official channels such as BBC News [9]. They split their dataset into three parts, a training set, a validation set, and a testing set, in the ratio of 70%, 20%, and 10%, respectively. The European Speech dataset provides 19,000 training samples covering 53 hours of speech audio, and the YouTube News dataset has 194,000 training samples covering 540 hours of speech audio. For the YouTube dataset, they achieved a 90% accuracy with the proposed vanilla CNN model and 98% with the Convolutional Recurrent Neural Network (CRNN) model.
On the European Speech dataset, their vanilla CNN model and CRNN model achieved accuracies of 90% and 91%, respectively.

Dataset

To train our model, we built our own dataset. We used two publicly available datasets for Bangla and English handwritten words: the CMATERdb [10] word dataset is used for handwritten Bangla words, and the IAMonDo-database [11] is used for English handwritten words. For printed words, we collected our own data. We collected English pages from an open-source Kaggle repository [12] and created Bangla word documents with different fonts and sizes. Details are presented in the following sections.

Dataset Preprocessing

The IAMonDo database has 115,320 word images, and the CMATERdb dataset has 17,079 word images. The English printed dataset has 14,025 word images. To build our printed Bangla corpus, we decided to use almost the same number of images as above, which is 15,000. Our dataset has 15,001 and 15,002 images of Bangla and English handwritten words, respectively. We also have 13,394 and 14,025 scanned word images for Bangla and English printed documents, respectively. We merged the handwritten and printed datasets and built one combined dataset, which has 28,395 Bangla words and 29,027 English words. Table 1 shows a summary of the dataset.

Table 1: Summary of the Datasets

Language | Type        | Original Size | Processed Size | Merged Size
Bangla   | Printed     | 13,394        | 13,394         | 28,395
Bangla   | Handwritten | 17,079        | 15,001         |
English  | Printed     | 14,025        | 15,002         | 29,027
English  | Handwritten | 115,320       | 14,025         |

Train-Test-Validation Split

We split our dataset into three parts after random shuffling. We first separated 13% of the data as the validation set, and the remaining 87% was assigned for training and testing the model. We then split this portion again, putting 23% into the test set and the rest into the training set. In summary, from the whole dataset we used 13% for cross-validation, 20% for testing, and 67% for training. Fig. 2 shows the frequency distributions for each set.

Fig. 2: Dataset distribution

Data Augmentation

Having a large dataset is crucial for the performance of a deep learning model [13]. Through a few workarounds, we can improve the performance of the model by augmenting the data we already have. We applied several data augmentation techniques to our dataset to increase its size, including:

• rotating images by 5 degrees,
• normalizing and rescaling the images using a min-max normalizer,
• shearing images by 10%,
• zooming images by 20%,
• shifting the width by 10%,
• shifting the height by 10%.

Fig. 3 and Fig. 4 show some examples of the applied data augmentation for the Bangla and English datasets, respectively.
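As an illustration of the augmentation settings listed above, the following is a minimal sketch using Keras' ImageDataGenerator. The use of TensorFlow/Keras is an assumption (the paper does not name its framework), and the directory path and batch size are hypothetical placeholders.

# Minimal augmentation sketch (assumed TensorFlow/Keras; not the authors' exact code).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=5,        # rotate images by up to 5 degrees
    rescale=1.0 / 255,       # min-max style rescaling of pixel values to [0, 1]
    shear_range=0.1,         # shear images by 10%
    zoom_range=0.2,          # zoom images by 20%
    width_shift_range=0.1,   # shift width by 10%
    height_shift_range=0.1,  # shift height by 10%
)

# Hypothetical usage: stream augmented word images from a folder containing
# 'bangla' and 'english' subdirectories (the path is a placeholder).
train_flow = augmenter.flow_from_directory(
    "dataset/train",          # placeholder path
    target_size=(150, 100),   # image height x width used by the model
    color_mode="grayscale",
    class_mode="categorical",
    batch_size=32,
)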
Work Methodology

As stated already, language detection is a crucial part of developing an OCR system. A multilingual OCR system needs to identify the language so that it can apply the language-specific model for character recognition.

Fig. 3: Augmented samples for Bangla words

Proposed Model

Convolution, the basic building block of a Convolutional Neural Network (CNN) [14], is a mathematical combination of two functions that merges two sets of information to produce a third function. The convolution is performed on the input data using a filter (kernel) to produce a feature map. A CNN can minimize the number of parameters needed to solve complex image recognition tasks.

The proposed model uses a CNN for the language identification task, with a two-class output: English and Bangla. The model uses convolution, a max pooling layer, fully connected dense layers, and regularization methods such as batch normalization and dropout, as seen in Fig. 5. In the model architecture, the first block contains one convolutional layer, which is also the input layer, with kernel size 3 and filter size 64. In this convolutional layer, the input image width is 100 and the height is 150, and the activation function is ReLU [15]. The second block is a Batch Normalization [16] layer with momentum set to its default value. It is connected to a max-pooling layer with a pool size of 2, followed by a 25% dropout layer. The output is then flattened to an array and passed through a fully connected dense layer of 256 hidden units with ReLU activation, regularized with another batch normalization layer followed by a 50% dropout layer, and finally passed through a fully connected dense layer of 2 nodes with softmax [17] activation. This final layer is the output layer.

Fig. 4: Augmented samples for English words

Fig. 5: Proposed CNN model

To minimize the error of the convolutional algorithms, optimization algorithms are heavily utilized. Our proposed model uses the Adam [18] optimizer with a learning rate of 0.001. The Adam optimization algorithm, an extension of stochastic gradient descent, is used to update network weights iteratively on the training data. To calculate the error for the optimization algorithm, we used the categorical cross-entropy loss function, as it performs better than the alternatives [19][20].

Epochs and EarlyStop

An issue with training neural networks is choosing how many training epochs to use. Too many epochs can result in overfitting the training dataset, while too few can result in an underfitted model. The EarlyStop feature has different metrics or arguments one can adjust to decide when the training process stops. We set our initial number of epochs to 25, while the EarlyStop function monitored our test loss and stopped the training when needed. With this tool, our model stopped at 17 epochs, based on the model loss.
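The architecture and training setup described above can be summarized in the following minimal sketch. It assumes a TensorFlow/Keras implementation with grayscale input images; padding, EarlyStopping patience, and the data generators are assumptions not stated in the paper.

# Sketch of the described model (assumed TensorFlow/Keras; details such as
# padding, patience, and grayscale input are assumptions, not from the paper).
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

model = models.Sequential([
    # Block 1: 64 filters, kernel size 3, ReLU, input height 150 x width 100
    layers.Conv2D(64, 3, activation="relu", input_shape=(150, 100, 1)),
    layers.BatchNormalization(),            # default momentum
    layers.MaxPooling2D(pool_size=2),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # fully connected, 256 hidden units
    layers.BatchNormalization(),
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),  # two classes: Bangla, English
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Train for at most 25 epochs, stopping early based on the monitored loss.
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                     restore_best_weights=True)
# model.fit(train_flow, validation_data=val_flow, epochs=25,
#           callbacks=[early_stop])   # train_flow / val_flow: data generators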
Performance

Our proposed method achieves very good results on the train, test, and validation sets.

Learning Curve

A learning curve is a plot of a model's learning over time and experience. Learning curves are a commonly used diagnostic in machine learning for algorithms that learn incrementally from a training dataset. After each update during training, the computed metrics can be plotted to display learning curves. The model can be evaluated on the training dataset and on a holdout validation dataset. Fig. 6 shows our learning curves for training and test accuracy and loss.

Fig. 6: Learning curve

From these learning curves, we can see that our model has no underfitting or overfitting issue. Over time, the model stabilized while becoming more accurate.

Accuracy and Loss

After 17 epochs, our model reached a maximum test accuracy of 99.98%, with a minimum test loss of 0.0054. For the training set, the maximum accuracy is 99.99%, with a minimum loss of 1.2912e-04.

Confusion Matrix

A confusion matrix is a table often used to describe the output of a classification model (or "classifier") on a collection of test data for which the actual values are known. We examined the confusion matrices for all the datasets. The confusion matrices for the train, test, and validation sets are presented in Tables II, III, and IV.

Table II: Confusion Matrix for the Train Set (n = 38,414)

                 | Predicted Bangla | Predicted English | Row total
Actual Bangla    | 19,018           |                   | 19,018
Actual English   |                  | 19,303            | 19,396
Column total     | 19,111           | 19,303            |

Table III: Confusion Matrix for the Test Set (n = 11,475)

                 | Predicted Bangla | Predicted English | Row total
Actual Bangla    | 5,652            |                   | 5,652
Actual English   |                  | 5,786             | 5,823
Column total     | 5,689            | 5,786             |

Table IV: Confusion Matrix for the Validation Set (n = 7,532)

                 | Predicted Bangla | Predicted English | Row total
Actual Bangla    | 3,725            |                   | 3,725
Actual English   |                  | 3,788             | 3,807
Column total     | 3,744            | 3,788             |
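For reference, a two-class confusion matrix of this kind is typically computed from a trained model's predictions as in the minimal sketch below. It assumes scikit-learn and NumPy and reuses the model from the earlier sketch; x_test and y_test are placeholder arrays, and the class ordering is an assumption.

# Minimal sketch for computing a two-class confusion matrix
# (assumes scikit-learn and NumPy; x_test / y_test are placeholder arrays).
import numpy as np
from sklearn.metrics import confusion_matrix

# y_test holds one-hot labels; model is the trained classifier from above.
y_true = np.argmax(y_test, axis=1)                 # 0 = Bangla, 1 = English (assumed order)
y_pred = np.argmax(model.predict(x_test), axis=1)

cm = confusion_matrix(y_true, y_pred)
print(cm)  # rows: actual class, columns: predicted class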
Conclusion and Future Work

In this paper, we presented an effective model to identify Bangla and English in a mixed-language document. At the word level, we achieved a 99.98% test accuracy. The work has primarily focused on Bangla and English language detection within the context of an OCR processing pipeline. But as mentioned before, the solution can also be used in other NLP and/or electronic or image processing tasks such as machine translation, automated billboard reading, number plate detection and other problems. Also, it should be pointed out that although we chose to focus on the Bangla/English pair, the solution should be extendable to any pair of languages with sufficient labelled data to retrain our models. In that sense, this is a solution for any pair-wise language detection.

Another important point to mention is the lack of comparison of our work with previous research. We were unable to obtain the datasets used by the authors of some of the previous research mentioned in this paper, which meant we were unable to compare our solution directly with theirs. In addition, we were also not able to reproduce some of these studies, as there is not enough information to implement their algorithms and test them on our corpus. Specifically, language detection focusing on the Bangla/English pair, where we concentrated our work, has not been published elsewhere, which makes our work entirely novel.

In summary, we have used a large word-level dataset to train our model with excellent performance. Since our dataset is currently only at the word level, our clear next step is to move the model to the page level and the character level. In addition, the proposed model can only detect two languages, but in the future, we plan to adapt it to multiple-language detection.

Acknowledgment

The authors would like to acknowledge the encouragement and funding from the "Enhancement of Bangla Language in ICT through Research & Development (EBLICT)" project, under the Ministry of ICT, the Government of Bangladesh.

References

MustGo.com. 2020. Bengali Language - Dialects & Structure. [online] Available at: <https://www.mustgo.com/worldlanguages/bengali/> [Accessed 29 May 2020].
En.wikipedia.org. 2020. English-Speaking World. [online] Available at: <https://en.wikipedia.org/wiki/English-speaking_world> [Accessed 29 May 2020].
opensource.google. 2020. Projects – Opensource.Google. [online] Available at: <https://opensource.google/projects/tesseract> [Accessed 29 May 2020].
Jayanthi, N., Harsha, H., Jain, N., & Dhingra, I. S. (2020, February). Language Detection of Text Document Image. In 2020 7th International Conference on Signal Processing and Integrated Networks (SPIN) (pp. 647-653). IEEE.
OakNorth. 2020. OakNorth CNN blog. [online] Available at: <https://www.oaknorth.com/cnn-blog/> [Accessed 29 May 2020].
Revay, S., & Teschke, M. (2019). Multiclass language identification using deep learning on spectral images of audio signals. arXiv preprint arXiv:1905.04348.
Bartz, C., Herold, T., Yang, H., & Meinel, C. (2017, November). Language identification using deep convolutional recurrent neural networks. In International Conference on Neural Information Processing (pp. 880-889). Springer, Cham.
Speech Repository - European Commission. 2020. Speech Repository. [online] Available at: <https://webgate.ec.europa.eu/sr/> [Accessed 28 May 2020].
YouTube. 2020. BBC News. [online] Available at: <https://www.youtube.com/user/bbcnews> [Accessed 28 May 2020].
Indermühle, E., Liwicki, M., & Bunke, H. (2010, June). IAMonDo-database: an online handwritten document database with non-uniform contents. In Proceedings of the 9th IAPR International Workshop on Document Analysis Systems (pp. 97-104).
Sarkar, R., Das, N., Basu, S., Kundu, M., Nasipuri, M., & Basu, D. K. (2012). CMATERdb1: a database of unconstrained handwritten Bangla and Bangla–English mixed script document image. International Journal on Document Analysis and Recognition (IJDAR), 15(1), 71-83.
Kaggle.com. 2020. Denoising Dirty Documents | Kaggle. [online] Available at: <https://www.kaggle.com/c/denoising-dirty-documents/data> [Accessed 29 May 2020].
Dvornik, N., Mairal, J., & Schmid, C. (2019). On the importance of visual context for data augmentation in scene understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Albawi, S., Abed Mohammed, T., & Alzawi, S. (2017). Understanding of a Convolutional Neural Network. 10.1109/ICEngTechnol.2017.8308186.
Nair, V., & Hinton, G. E. (2010). Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10) (pp. 807-814).
Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press, p. 184. http://www.deeplearningbook.org.
Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Golik, P., Doetsch, P., & Ney, H. (2013, August). Cross-entropy vs. squared error training: a theoretical and experimental comparison. In Interspeech (Vol. 13, pp. 1756-1760).
Mannor, S., Peleg, D., & Rubinstein, R. (2005, August). The cross-entropy method for classification. In Proceedings of the 22nd International Conference on Machine Learning (pp. 561-568).
HAND-WRITTEN BANGLA CHARACTER RECOGNITION USING DEEP CONVOLUTIONAL NEURAL NETWORK

Dipayan Bhadra
Master of Science in Electrical and Electronic Engineering
Department of Electrical and Electronic Engineering
Bangladesh University of Engineering and Technology
January 2019

ACKNOWLEDGEMENTS

I would like to convey my sincerest appreciation and heartfelt gratitude to my thesis supervisor, Professor Dr. S.M. Mahbubur Rahman, for his all-out support, help, inspiration and able guidance throughout the tenure of his supervision. His plentiful resources and pragmatic ideas in the field of computer vision have allowed me to work with exhaustive datasets as well as state-of-the-art theories and techniques in this field. He has always been a great mentor for a novice like me, and his appreciation and constructive criticism at every stage of this thesis have only bolstered and instilled the confidence in me to successfully complete the dissertation. I am truly privileged to have had the experience of doing research work under his supervision. I am grateful to Professor Dr. Md. Shafiqul Islam, Head of the Department of Electrical and Electronic Engineering, BUET, for rendering me support and departmental assistance during my M.Sc. study at BUET. I would also like to thank all my teachers, colleagues and friends for their support and encouragement during the work. I am indebted to my parents for their mental support and encouragement. And finally, I would like to thank the Almighty, the Lord of the Worlds, for blessing me with so many respected personalities around me and this auspicious achievement, in particular.

Dipayan Bhadra
January, 2019

Contents

Abstract
List of Figures
List of Tables
List of Abbreviations
1. Introduction
   1.1 Introduction
   1.2 Handwritten Character Recognition
       1.2.1 Application of Offline Handwritten Recognition
       1.2.2 Background of HCR Systems
       1.2.3 Challenges
   1.3 Problem Identification
   1.4 Related Works
   1.5 Motivation and Scope of Works
   1.6 Objectives
   1.7 Outline
2. Convolutional Neural Network: A Review
   2.1 Introduction
   2.2 Neural Networks
       2.2.1 Activation Functions
       2.2.2 Softmax
       2.2.3 Loss Function
       2.2.4 Back-propagation
       2.2.5 Gradient Descent
       2.2.6 Momentum
       2.2.7 Nesterov's Accelerated Gradient
       2.2.8 Weight Decay
       2.2.9 Local Response Normalization
       2.2.10 Xavier Initialization
   2.3 Convolutional Neural Networks
       2.3.1 Typical CNN Structure
       2.3.2 Layers of CNNs
   2.4 Advantage of CNN over NN
   2.5 Conclusion
3. Proposed DCNN Architectures
   3.1 Introduction
   3.2 Proposed DCNN Architecture
       3.2.1 Model 1
       3.2.2 Model 2
       3.2.3 Model 3
       3.2.4 Model 4 (Proposed DCNN)
       3.2.5 Model 5
   3.3 Conclusion
4. Experimental Results
   4.1 Introduction
   4.2 Experimental Platform
   4.3 Database
       4.3.1 Database CMATERdb
       4.3.2 Database BBCD
       4.3.3 Combined Database
   4.4 Training of DCNN
   4.5 Performance Evaluation
   4.6 Conclusion
5. Conclusion
   5.1 Summary of The Work
   5.2 Future Scope
References
Appendix A

Abstract
In recent years, there has been much interest in automatic character recognition. Between handwritten and printed forms, Handwritten Character Recognition (HCR) is more challenging. A handwritten character written by different persons is not identical but varies in both size and shape. Numerous variations in the writing styles of individual characters make the recognition task difficult. The similarities in distinct character shapes, the overlaps, and the inter-connections of neighboring characters further complicate the problem. Recently, the Convolutional Neural Network (CNN) has shown noticeable success in the areas of image-based recognition, video analytics, and natural language processing due to its unique characteristics of feature extraction and classification. This is mainly due to the fact that the design of a CNN is motivated by a close imitation of the visual mechanism, as compared to the conventional neural network. The convolution layer in a CNN performs a filtering function similar to that seen in the cells of the visual cortex. As a result of replicating the weight configuration of one layer to the local neighboring receptive field in the previous layer through the convolution operation, the features extracted by the CNN possess invariance properties with respect to scale, rotation, translation and other distortions of a pattern. A recently reported HCR technique that considers Bangla characters uses a shallow CNN with only two convolution layers and a fixed kernel size, experimented on a small-size private dataset. In this thesis, a deep CNN with three convolutional layers, using different kernel sizes in different convolutional layers, is applied to a large dataset made by combining two datasets. Experimental results show a recognition accuracy that is 7% higher than that of the previous work.

List of Figures

1.1 Categories of character recognition system
1.2 Different steps in character recognition system
1.3 General challenges in image recognition problems
1.4 Different zones of Bangla characters
2.1 An artificial neural network
2.2 Training of neural networks
2.3 Illustration of a biological neuron and its mathematical model
2.4 A neural network consisting of input, hidden and output layers
2.5 Placement of the activation function in the neural network model
2.6 Visual comparison of the three most relevant DNN activation functions: hyperbolic tangent, sigmoid and rectifier
2.7 A schematic diagram of model LGN and cortex. The model visual cortex is composed of 48×48 model cortical neurons, which have separate dendritic fields. The model LGN is given as four sheets of different cell types. Each sheet is composed of 24×24 model LGN cells, whose receptive field centers are arranged retinotopically
2.8 Vision algorithm pipeline
2.9 Typical block diagram of a CNN
2.10 A representation of the convolution process
2.11 A representation of max pooling and average pooling
2.12 A representation of ReLU functionality
2.13 The hyperbolic tangent function
2.14 Absolute of the hyperbolic tangent function
2.15 The sigmoid function
2.16 Representation of tanh processing
2.17 Processing of a fully connected layer
3.1 Proposed deep CNN architecture for Bangla HCR
4.1 Training and validation accuracy curves versus number of epochs
4.2 Cost function versus number of epochs
4.3 Learning rate versus number of epochs
4.4 Input images in the database and the same images after normalization
4.5 Sample kernels of the first convolution layer
4.6 Feature maps after the first convolution layer
4.7 Sample kernels of the second convolution layer
4.8 Feature maps after the second convolution layer
4.9 Sample kernels of the third convolution layer
4.10 Feature maps after the third convolution layer

List of Tables

1.1 Basic Bangla characters
3.1 Parameter setup for the DCNN
4.1 Major libraries and packages used to implement the algorithm
4.2 Sample images of database CMATERdb 3.1.2
4.3 Sample images of database BBCD
4.4 Accuracy of the DCNN for BHCR
4.5 Confusion matrix produced for the test dataset (15,859 samples) from the DCNN for BHCR
4.6 Confusion matrix produced for the training dataset (28,529 samples) from the DCNN for BHCR
4.7 Confusion matrix produced for the validation dataset (8,400 samples) from the DCNN for BHCR
4.8 Experimental results comparing the proposed DCNN with some state-of-the-art methods of BHCR in terms of accuracy and variance on the test dataset of the combined database
4.9 Comparison of reported test accuracies of some state-of-the-art methods with the proposed DCNN for BHCR
A.1 Sample images of database CMATERdb 3.1.2
A.2 Sample images of database BBCD
B.1 Comparison between deep-CNN models

List of Abbreviations

BBCD    Bangla Basic Character Database
BHCR    Bangla Hand-written Character Recognition
CDR     Correct Detection Rate
CMATER  Center for Microprocessor Applications for Training Education and Research
CNN     Convolutional Neural Network
DCNN    Deep Convolutional Neural Network
DNN     Deep Neural Network
GPU     Graphic Processing Unit
HCR     Hand-written Character Recognition
HMM     Hidden Markov Model
ICA     Independent Component Analysis
LDA     Linear Discriminant Analysis
LGN     Lateral Geniculate Nucleus
MICR    Magnetic Ink Character Recognition
MLP     Multilayer Perceptron
MQDF    Modified Quadratic Discriminant Function
NAG     Nesterov's Accelerated Gradient
NN      Neural Network
OCR     Optical Character Recognition
PCA     Principal Component Analysis
ReLU    Rectified Linear Unit
ResNet  Residual Network
ROI     Region of Interest
SIFT    Scale-Invariant Feature Transform
SVM     Support Vector Machine

Chapter 1
Introduction

1.1 Introduction

Optical character recognition (OCR) is the mechanical or electronic conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene photo (for example, the text on signs and billboards in a landscape photo) or from subtitle text superimposed on an image (for example, from a television broadcast). It is widely used as a form of information entry from printed paper data records, whether passport documents, invoices, bank statements, computerized receipts, business cards, mail, printouts of static data, or any suitable documentation. It is a common method of digitizing printed texts so that they can be electronically edited, searched, stored more compactly, displayed on-line, and used in machine processes such as cognitive computing, machine translation, (extracted) text-to-speech, key data extraction and text mining.
OCR is a field of research in pattern recognition, artificial intelligence and computer vision.

Character recognition techniques associate a symbolic identity with the image of a character. Character recognition systems are classified into two types, based on data acquisition and text type: online and offline (Figure 1.1). An online character recognition system utilizes a digitizer which directly captures the writing along with the order of the strokes, speed, and pen-up and pen-down information. Offline character recognition captures the data from paper through an optical scanner or camera. Offline character recognition is also known as optical character recognition because the image of the text is converted into a bit pattern by optically digitizing devices. In the case of online handwritten character recognition, the handwriting is captured and stored in digital form via different means. Usually, a special pen is used in conjunction with an electronic surface. As the pen moves across the surface, the two-dimensional coordinates of successive points are represented as a function of time and are stored in order.

Figure 1.1: Categories of character recognition system

It is generally accepted that the online method of recognizing handwritten text has achieved better results than its offline counterpart. This may be attributed to the fact that more information may be captured in the online case, such as the direction, speed and order of the strokes of the handwriting. Offline character recognition can be further grouped into two types:

• Magnetic Ink Character Recognition (MICR)
• Optical Character Recognition (OCR)

In MICR, the characters are printed with magnetic ink. The reading device can recognize a character according to the unique magnetic field of each character. MICR is mostly used in banks for cheque authentication. OCR deals with the recognition of characters acquired by optical means, typically a scanner or a camera. The characters are in the form of digital images and can be either printed or handwritten, of any size, shape or orientation. OCR can be subdivided into handwritten character recognition and printed character recognition. Handwritten character recognition is more difficult to implement than printed character recognition due to diverse human handwriting styles and customs. In printed character recognition, the images to be processed are in the form of standard fonts like Times New Roman, Arial, Courier, etc.

1.2 Handwritten Character Recognition

1.2.1 Application of Offline Handwritten Character Recognition

HCR has been successfully used in several applications.
Some of the important applications of offline handwritten recognition are discussed in the following section:
• Bank Automation: Offline handwritten recognition is basically used for cheque reading in banks. Cheque reading is a very important commercial application of offline handwritten recognition. Handwritten recognition systems play a very important role in banks for signature verification and for recognition of the amount filled in by the user.
• Postal office automation: Handwritten recognition systems can be used for reading the handwritten postal address on letters. An offline handwritten recognition system can be used for recognizing the handwritten digits of the postcode.
HCR can read this code and sort mail automatically.
• Form Processing: HCR can also be used for form processing. Forms are normally used for collecting public information. Replies can be handwritten in the space provided.
• Signature Verification: HCR can also be used to identify a person by signature verification. Signature identification is the specific field of handwriting identification in which the writer is verified by some specific handwritten text. A handwritten recognition system can be used to identify a person by handwriting, because handwriting varies from person to person.

1.2.2. Background of HCR Systems
An HCR system is developed with the objective of recognizing handwritten characters from a digital image of handwritten documents. An HCR system includes steps such as image acquisition, character segmentation, pre-processing of the character image, feature extraction and recognition of the character class with the extracted features, as well as post-processing.
a) Image acquisition – Gray-level scanning of handwritten paper documents, at an appropriate resolution, typically 300–1000 dpi.
b) Preprocessing
– Binarization (two-level thresholding).
– Segmentation to isolate individual characters.
– Conversion to another character representation like skeleton or contour.
c) Feature Extraction – Extracting meaningful features.
d) Classification – Recognition using one or more classifiers.
e) Contextual verification in post-processing.
The block diagram of a general character recognition system is shown in Figure 1.2. Images for an HCR system might be acquired by scanning a hand-written document, by capturing a photograph of the document, or by directly writing on a computer using a stylus. This is also known as the digitization process. Preprocessing involves a series of operations performed to enhance the image and make it suitable for segmentation. The preprocessing step involves removal of noise generated during document generation. A proper filter like a mean filter, min-max filter or Gaussian filter may be applied to remove noise from the document. The binarization process converts a gray-scale or colored image to a black and white image. Binary morphological operations like opening, closing, thinning, hole filling, etc. may be applied to enhance the image. If the document is scanned then it may not be perfectly horizontally aligned, so we need to align it by performing slant angle correction. The input document may be resized if it is too large, to reduce dimensions and improve processing speed. However, reducing the dimension below a certain level may remove some useful features too. Generally, a document is processed in a hierarchical way. At the first level, lines are segmented using a row histogram. From each row, words are extracted using a column histogram, and finally characters are extracted from the words. The accuracy of the final result highly depends on the accuracy of segmentation.

Figure 1.2: Different steps in character recognition system

Feature extraction is the heart of any character recognition system. Feature extraction techniques like Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Independent Component Analysis (ICA), Chain Code, Scale Invariant Feature Transform (SIFT), zoning, gradient-based features and histograms are applied to extract the features of individual characters. These features are used to train the classification system.
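The preprocessing steps described above (noise removal, binarization, cropping and size normalization) can be sketched in a few lines of Python. The snippet below is only an illustrative sketch using OpenCV and NumPy; the function name, threshold choice and output size are assumptions made for demonstration, not details taken from the thesis.

```python
import cv2
import numpy as np

def preprocess_character(image_path, out_size=32):
    """Minimal preprocessing sketch: grayscale -> denoise -> binarize -> crop -> resize."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)          # image acquisition (scan/photo)
    blurred = cv2.GaussianBlur(gray, (3, 3), 0)                   # noise removal with a Gaussian filter
    # Otsu's method picks the binarization threshold automatically
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # crop to the bounding box of the character (a crude segmentation step)
    ys, xs = np.nonzero(binary)
    if len(xs) > 0:
        binary = binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # size normalization to a fixed input resolution for the recognizer
    resized = cv2.resize(binary, (out_size, out_size), interpolation=cv2.INTER_AREA)
    return resized.astype(np.float32) / 255.0                     # scale pixel values to [0, 1]
```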
When a new input image is presented to the HCR system, its features are extracted and given as an input to a trained classifier such as an artificial neural network or a support vector machine. Classifiers compare the input features with the stored patterns and find out the best matching class for the input. Post-processing, though not mandatory, improves the accuracy of recognition. Syntax and semantic analysis or similar higher-level concepts might be applied to check the context of the recognized character.

1.2.3. Challenges
Since this task of recognizing characters from an image is relatively trivial for a human to perform, it is worth considering the challenges involved from the perspective of a computer vision algorithm. An in-exhaustive list of general challenges in image recognition tasks is given below:
• Viewpoint variation: A single instance of an object can be oriented in many ways with respect to the camera.
• Scale variation: Visual classes often exhibit variation in their size (size in the real world, not only in terms of their extent in the image).
• Deformation: Many objects of interest are not rigid bodies and can be deformed in extreme ways.
• Occlusion: The objects of interest can be occluded. Sometimes only a small portion of an object (as little as a few pixels) could be visible.
• Illumination conditions: The effects of illumination are drastic on the pixel level.
• Background clutter: The objects of interest may blend into their environment, making them hard to identify.
• Intra-class variation: The classes of interest can often be relatively broad, such as chair. There are many different types of these objects, each with their own appearance.

Figure 1.3: General challenges in image recognition problems

Moreover, hand-written character recognition is more challenging than character recognition from printed form. In this particular case of hand-written character recognition from images, the complexity of the recognition task extends because of numerous variations in writing styles, character shapes and sizes of different persons, similarities in character shapes, and the overlaps and interconnections of neighboring characters. HCR complexity varies among different languages due to distinct shapes, strokes and number of characters. There are more characters in Bangla (50 characters) than in English (26 characters), and some contain an additional sign above and/or below. Compound characters are also used frequently in Bangla. Moreover, Bangla contains many similar-shaped characters; in some cases a character differs from a similar one by a single dot or mark. These characteristics make it difficult to achieve better performance with a simple technique and make Bangla HCR harder to work with than English HCR.

1.3. Problem Identification
The Bangla character set is divided into two categories: basic and compound characters. Basic characters are the collection of vowels and consonants. The Bangla character set has 11 vowels and 39 consonants (Table 1.1).
In Bangla, there are a large number of compound characters formed by combination of two or more basic characters. Most basic character shapes have a horizontal line at their upper parts, called the headline or matra, and three zones of such characters can be identified as shown in Figure 1.4. Each of these characters (excepting the character 'BINDU') has a part in the middle zone, while only a few of them have an additional part either in the upper or in the lower zone.

Table 1.1: Basic Bangla characters. There are 11 vowels and 39 consonants in Bangla script.
Vowels (11 nos.):
অ আ ই ঈ উ ঊ ঋ এ ঐ ও ঔ
A AA I II U UU R E AI O AU
Consonants (39 nos.):
ক খ গ ঘ ঙ চ ছ জ ঝ ঞ ট
KA KHA GA GHA NGA CA CHA JA JHA NYA TTA
ঠ ড ঢ ণ ত থ দ ধ ন প ফ
TTHA DDA DDHA NNA TA THA DA DHA NA PA PHA
ব ভ ম য র ল শ ষ স হ ড়
BA BHA MA YA RA LA SHA SSA SA HA RRA
ঢ় য় ৎ ং ঃ ঁ
DHRA YYA KHANDA TA ANUSVARA VISARGA CHANDRABINDU

Figure 1.4: Different zones of Bangla characters

1.4. Related Works
Offline handwriting recognition has been studied extensively during the last three decades [1–6]. Among these, recognition of isolated characters has the advantage that segmentation is usually not needed and, when written in boxes, size normalization is accomplished to a large extent. So, experimental results on them provide a kind of upper bound on the performance of the character recognizer in a handwriting analysis task. Numerous techniques have been proposed in the literature for recognition of isolated handwritten characters. These include (a) template matching [7, 8], e.g., direct pixel matching, deformable template matching, relaxation-based matching, structural shape matching, etc.; (b) statistical classifiers, e.g., Bayes' classifier [9], hidden Markov model (HMM) [10, 11], etc.; (c) graph-based and automata-based syntactic classifiers; (d) machine learning-based techniques involving neural nets [12, 13], rough sets [14], fuzzy sets [15, 16], support vector machines (SVM) [17, 18], etc. Among these, the approaches based on HMM and SVM are popular due to their potential in recognition of unconstrained handwriting. The Convolutional Neural Network [19] is also very efficient in document recognition tasks. In the overall recognition scheme, preprocessing techniques such as size normalization, smoothing, slant correction, etc., efficient feature selection and suitable post-processing methods that make use of contextual information for error correction play important roles in improving the final performance. Most of the reported studies on handwriting recognition have been done on English [18, 20, 21] and oriental scripts like Chinese [22, 23], Korean [10, 24] and Japanese [25, 26]. The reports on Indian scripts are few. In the earliest such study [27], stroke-based features and a tree classifier were used for classification of handwritten Devanagari numerals. Parui et al. [28] proposed a syntactic scheme for handwritten Bangla numeral recognition while Dutta and Chaudhuri [29] used a neural net classifier
to recognize isolated handwritten alphanumeric characters. Among others, Bhattacharya et al. [30] used a self-organizing neural net while Bhattacharya and Chaudhuri [31] used a classifier combination approach for recognition of handwritten Bangla numerals. A multistage recognition scheme for mixed numerals has been reported recently [32]. For Bangla alphabetic characters, Rahman et al. [33] proposed a multistage scheme while Bhowmick et al. [34] used a neural network-based approach. HMM-based recognition of Bangla basic characters is reported in [35]. A major obstacle to effective research on off-line handwritten character recognition of Bangla and other Indian scripts is the non-existence of the required benchmark databases. Previous studies were reported on the basis of small databases collected in laboratory environments. However, several standard databases such as NIST, MNIST [19], CEDAR [36], CENPARMI, etc., are available for Latin script. Khosravi and Kabir [37] presented a large dataset of handwritten Farsi digits. An Arabic handwritten database consisting of words and texts written by 100 writers was described in [38]. Su et al. [39] presented a Chinese handwriting database HIT-MW collected in an unconstrained manner. A few other databases of handwriting samples include [40, 41] and [42]. A few notable works are available for Bengali handwritten character recognition. Bhowmik et al. [52] proposed a fusion classifier using a Multilayer Perceptron (MLP), an RBF network and an SVM. They used the wavelet transform for feature extraction from character images. In classification, they considered some similar characters as a single pattern and trained the classifier for 45 classes. Basu et al. [53] proposed a hierarchical approach to segment characters from words, and an MLP is used for classification. In the segmentation stage they used three different feature extraction techniques, but they reduced the character patterns into 36 classes by merging similar characters into a single class. Recently, Bhattacharya et al. [54] considered a two-stage recognition scheme for 50 basic character classes. The feature vector for the first classifier is computed by overlaying a rectangular grid consisting of regularly spaced horizontal and vertical lines over the character bounding box. The response of this first classifier is analyzed to identify its confusion between a pair of similar-shaped characters. The second stage of classification is used to resolve the confusion, and its feature vector is computed by overlaying another rectangular grid, but consisting of irregularly spaced horizontal and vertical lines over the character bounding box. They used a Modified Quadratic Discriminant Function (MQDF) classifier and an MLP as classifiers in the first and second stages, respectively. Recently, Md. Mahbubar Rahman et al. [55] applied a CNN scheme to Bengali HCR and reported 85.96% test accuracy. A CNN with two convolution and sub-sampling layers was used in this work. The kernel size considered in this work is 5×5; 6 and 12 kernels were used in the 1st and 2nd convolution layers respectively to extract features. A database was created by taking samples from 30 individuals of different ages and education levels. The prepared dataset size was 20,000, having 400 samples for each character, among which 17,500 samples (350 samples for each character) were used as the training set and 2,500 samples (50 samples
per character) were used as the test set.

1.5. Motivation and Scope of Works
The convolutional neural network (CNN) has the ability to recognize visual patterns directly from pixel images with minimal preprocessing. The deep CNN (DCNN) [5] has been used successfully for image classification, handwritten digit and character recognition in recent years. But there is no record of a DCNN being used for the Bangla HCR (BHCR) task. For this reason, a DCNN scheme will be investigated for the BHCR task and its performance will be analyzed. It can be assumed that DCNN-based BHCR will give satisfactory results in terms of recognition accuracy, time requirement for recognition and storage requirements, since after training, the training data does not need to be stored. Only the weights and biases of the network are stored, which requires a negligible storage size. Training requires much time, but testing requires a very small amount of time, so it can be applied in real-time recognition and analysis.

1.6. Objectives
The specific objectives of this thesis are:
• To develop an architecture of a Deep CNN (DCNN) to recognize hand-written Bangla characters.
• To analyze the DCNN architecture and determine the optimum number of convolutional layers and kernel size that would provide improved recognition accuracy over fifty classes of hand-written Bangla characters.
• To evaluate the performance of the proposed DCNN-based recognition scheme against that of existing methods in terms of accuracy, storage requirement, and computational complexity on a publicly available dataset.
The outcome of the thesis is a novel recognition scheme for hand-written Bangla characters with low storage requirement and processing time that would provide improved accuracy to facilitate automatic recognition.

1.7. Outline
The thesis is organized as follows: In Chapter 2, a brief review of the neural network and the convolutional neural network is introduced. Then the advantages of the CNN over the NN are explained. In Chapter 3, the proposed DCNN architecture is explained. Chapter 4 describes the database used in the experiment, and the experimental results and analyses, comparing the proposed method with existing recognition methods. Finally, Chapter 5 provides the conclusion along with the scope for future work.

Chapter 2
Convolutional Neural Network: A Review

2.1. Introduction
This chapter provides a review of the convolutional neural network. Since the CNN is a category of neural network, at first a brief introduction to the NN along with its structure and training method is given. After that the basic structure of a CNN is presented; as the training method of a CNN is similar to that of the NN, it is omitted. At the end of this chapter, the advantages of the CNN over the NN are presented.

2.2. Neural Networks
A neural network is a system of interconnected artificial "neurons" that exchange messages between each other. The connections have numeric weights that are tuned during the training process, so that a properly trained network will respond correctly when presented with an image or pattern to recognize. The network consists of multiple layers of feature-detecting "neurons". Each layer has many neurons that respond to different combinations
of inputs from the previous layers. As shown in Figure 2.1, the layers are built up so that the first layer detects a set of primitive patterns in the input, the second layer detects patterns of patterns, the third layer detects patterns of those patterns, and so on. Deep neural networks typically use 2 to 10 distinct layers for pattern recognition.

Figure 2.1: An artificial neural network

Training of an NN is performed using a "labeled" dataset of inputs in a wide assortment of representative input patterns that are tagged with their intended output response. Training uses general-purpose methods to iteratively determine the weights for intermediate and final feature neurons. Figure 2.2 demonstrates the training process at a block level. Neural networks are inspired by biological neural systems. The basic computational unit of the brain is the neuron, and neurons are connected with synapses. Figure 2.3 compares a biological neuron with a basic mathematical model.

Figure 2.2: Training of Neural Networks

Figure 2.3: Illustration of a biological neuron (top) and its mathematical model (bottom)

In a real animal neural system, a neuron is perceived to be receiving input signals from its dendrites and producing output signals along its axon. The axon branches out and connects via synapses to dendrites of other neurons. When the combination of input signals reaches some threshold condition among its input dendrites, the neuron is triggered and its activation is communicated to successor neurons.

Figure 2.4: A neural network consisting of input, hidden and output layers. A neural network can contain an arbitrary number of hidden layers. The inputs of the hidden layer and output layer are weighted by weights w_ij and u_jk respectively.

In the computational model of a neural network, the signals that travel along the axons (e.g., x0) interact multiplicatively (e.g., w0x0) with the dendrites of the other neuron based on the synaptic strength at that synapse (e.g., w0). Synaptic weights are learnable and control the influence of one neuron on another. The dendrites carry the signals to the cell body, where they are all summed. If the final sum is above a specified threshold, the neuron fires, sending a spike along its axon. In the computational model, it is assumed that the precise timings of the firing do not matter and only the frequency of the firing communicates information. Based on the rate code interpretation, the firing rate of the neuron is modeled with an activation function f that represents the frequency of the spikes along the axon. A common choice of activation function is the sigmoid. In summary, each neuron calculates the dot product of inputs and weights, adds the bias, and applies a non-linearity as a trigger function (for example, a sigmoid response function).
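As a concrete illustration of the neuron model just described (dot product of inputs and weights, plus a bias, passed through a non-linear activation), the following NumPy sketch computes the output of a single neuron and of a small layer of neurons. All values, sizes and names are made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one neuron: dot product of inputs and weights, plus bias, through the activation
x = np.array([0.5, -1.2, 3.0])        # input signals arriving at the "dendrites"
w = np.array([0.8, 0.1, -0.4])        # synaptic weights (learned during training)
b = 0.2                               # bias term
neuron_output = sigmoid(np.dot(w, x) + b)    # f(sum_i w_i x_i + b)

# a whole layer is the same operation with a weight matrix: one row per neuron
W = np.random.randn(4, 3) * 0.1       # 4 neurons, each with 3 input weights
B = np.zeros(4)
layer_output = sigmoid(W @ x + B)     # shape (4,): one activation per neuron
print(neuron_output, layer_output)
```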
The whole network still expresses a single differentiable score function: from the raw image pixels on one end to class scores at the other.

Figure 2.5: Placement of the activation function in the neural network model

2.2.1. Activation Functions
The output of each node is produced by the node's activation function φ that takes the weighted inputs of the node as parameters, transformed by a transfer function (see Figure 2.5). The transfer function creates a linear combination of the weighted inputs in order to feed them to the activation function. To approximate complicated functions, nonlinear activations are often used. The following sections briefly describe the different nonlinear activation functions most commonly used in neural networks.

Hyperbolic tangent
One of the most popular activation functions is the hyperbolic tangent function (Equation 2.1). The input x is a weighted linear combination of the inputs of the node. This function works most effectively on inputs in range (0,1), producing outputs in the interval (−1,1).
f(x) = tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x)) (2.1)

Sigmoid
The logistic sigmoid function (Equation 2.2) is a widely used activation function, considered biologically more plausible than the hyperbolic tangent. One of the reasons the sigmoid function is broadly used is the fact that it is differentiable at every point.
f(x) = 1 / (1 + e^(−x)) (2.2)

ReLU
The rectified linear unit function (Equation 2.3) is used with the purpose of increasing the non-linearity of the network. Rectifying neurons are considered to be biologically more plausible than logistic sigmoid or hyperbolic tangent neurons. They benefit from their simplicity, resulting in faster training and performance improvements in particular cases, and are therefore often used in DNNs/CNNs. ReLU is given by the equation:
f(x) = max(0, x) (2.3)
Figure 2.6 visualizes a comparison of the rectifier function and the activation functions introduced in this section.

Figure 2.6: Visual comparison of the three most relevant DNN activation functions: hyperbolic tangent, sigmoid and rectifier

2.2.2. Softmax
The softmax activation function (Equation 2.4) is usually used in the last network layer, converting an arbitrary real value to the posterior probability of the class ck in the range (0,1):
p(ck) = exp(ak) / Σj exp(aj) (2.4)
where m corresponds to the number of output nodes (classes) and ak is the activation value of the k-th node,
ak = Σj wkj hj(x) (2.5)
given the k-th node's weights wkj and the output of the previous layer hj(x).

2.2.3. Loss Function
To measure the precision of the network outcome, a loss (also cost or objective) function [33] is used. It expresses how much the prediction differs from the expected value. The output of the loss function is a real value referred to as the cost or the penalty. An example of a loss function that operates on probabilities, and is thus often used in visual classification problems, is the cross-entropy loss function (Equation 2.6):
L(y, p) = − Σk yk log(pk) (2.6)
where m is the number of possible classes (nodes) in the output layer, y the target vector and p the a posteriori probability for each class predicted by the network.
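The activation functions, softmax and cross-entropy loss defined in Equations 2.1–2.6 translate directly into NumPy. The sketch below is illustrative only; it is not code used in the thesis.

```python
import numpy as np

def tanh(x):                       # Equation 2.1, outputs in (-1, 1)
    return np.tanh(x)

def sigmoid(x):                    # Equation 2.2, outputs in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):                       # Equation 2.3: max(0, x)
    return np.maximum(0.0, x)

def softmax(a):                    # Equation 2.4: activations -> class probabilities
    e = np.exp(a - np.max(a))      # subtract the max for numerical stability
    return e / np.sum(e)

def cross_entropy(y, p):           # Equation 2.6: target vector y, predicted probabilities p
    return -np.sum(y * np.log(p + 1e-12))

a = np.array([2.0, 0.5, -1.0])     # activations of the output nodes
p = softmax(a)                     # posterior probabilities, summing to 1
y = np.array([1.0, 0.0, 0.0])      # one-hot target
print(p, cross_entropy(y, p))
```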
Evaluated derivatives of a loss function are used in the training phase.

2.2.4. Backpropagation
Backpropagation is a neural network training algorithm. For supervised learning, target classes are essential for error calculation. The error is afterwards backpropagated to every node in the previous layers. The output of a node with weights wkj, input x and activation function φ is
yk = φ(Σj wkj xj) (2.7)
and the error e (Equation 2.8) is obtained as the gradient of the loss function L with respect to each layer's weights wkj:
e = ∂L/∂wkj (2.8)
Gradient computation demands application of the chain rule in order to compute the partial derivative of the loss function L with respect to a particular weight wkj. Using the error, the weights are updated by an optimization algorithm such as gradient descent.

2.2.5. Gradient Descent
The most common function optimization algorithm used for neural networks is gradient descent, a first-order approximation algorithm that updates the weights of the model. The algorithm approaches a local minimum in the direction of the negative gradient of the loss function with respect to the weights. The size of the step is called the learning rate. It is a scalar in the range (0,1), controlling the magnitude of the change of the network's parameters (weights). To perform one update of the weights, the whole training set has to be used. For large training sets, this method might be computationally expensive. A more time-efficient gradient descent based optimization method is stochastic gradient descent or SGD (Equation 2.9). SGD needs only one observation (or a subset of the training set) to update the model parameters w. As the name suggests, at each weight update a random observation is used. Furthermore, SGD does not tend to end up stuck in a local minimum as easily as ordinary gradient descent (also called batch gradient descent). A disadvantage of SGD is a slower convergence rate than that of batch gradient descent. Due to its stochasticity, a wrong choice of starting observations may cause the algorithm to move further from the global minimum and make convergence problematic.
w(t+1) = w(t) − η∇wL(w(t)) (2.9)
The weights w are updated by the negative of the gradient of the loss function with respect to the weights. This change is limited by the learning rate η. Root mean square prop or RMSprop uses the same concept of the exponentially weighted average of the gradients as gradient descent with momentum, but the difference is in the update of the parameters:
MS(w(t)) = γ MS(w(t−1)) + (1 − γ) (∂L(t)/∂w(t))²
w(t+1) = w(t) − (η / √(MS(w(t)) + ε)) ∂L(t)/∂w(t) (2.10)

2.2.6. Momentum
Numerous improvements for gradient descent have been proposed. One of the most frequently used enhancements is momentum. Momentum helps to prevent convergence to a local minimum and also speeds up the convergence process by preserving a fraction of previous weight adjustments. The previous weight adjustment is used in the current update, multiplied by the factor µ, the momentum (Equation 2.11).
w(t+1) = w(t) − η∇wL(w(t)) + µ∆w(t) (2.11)
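The weight-update rules above (plain SGD of Equation 2.9, RMSprop of Equation 2.10 and momentum of Equation 2.11) can be written as small update functions. The following NumPy sketch is illustrative; the hyper-parameter values and the toy objective are assumptions made for the example.

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    """Plain (stochastic) gradient descent: w <- w - eta * dL/dw (Equation 2.9)."""
    return w - lr * grad

def momentum_step(w, grad, velocity, lr=0.01, mu=0.9):
    """Momentum update: keep a fraction mu of the previous adjustment (Equation 2.11)."""
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity

def rmsprop_step(w, grad, ms, lr=0.001, gamma=0.9, eps=1e-8):
    """RMSprop: scale the step by a running average of squared gradients (Equation 2.10)."""
    ms = gamma * ms + (1.0 - gamma) * grad ** 2
    return w - lr * grad / (np.sqrt(ms) + eps), ms

# toy example: minimize L(w) = ||w||^2, whose gradient is 2w
w = np.array([1.0, -2.0])
velocity = np.zeros_like(w)
for _ in range(100):
    grad = 2.0 * w
    w, velocity = momentum_step(w, grad, velocity)
print(w)   # should be close to the minimum at the origin
```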
2.2.7. Nesterov's Accelerated Gradient
Nesterov's accelerated gradient (NAG) is an optimal algorithm for smooth convex optimization proposed by Nesterov, with a convergence rate of O(1/t²) after t steps, compared to that of gradient descent, O(1/t). However, for visual problems, the optimized functions are barely convex and smooth, thus the assumptions under which the convergence rate holds are not preserved. The novelty of NAG is in the weight update using the gradient on the weights already updated by momentum (Equation 2.12).
∆w(t+1) = µ∆w(t) − η∇wL(w(t) + µ∆w(t)) (2.12)

2.2.8. Weight Decay
In the training phase, without regularization, weights tend to grow to large values, slowing down the convergence process. Weight decay (also called L2 regularization) is a way to prevent weights from growing unboundedly (Equation 2.13). The weight decay parameter λ represents the portion of the weight to be subtracted.
w(t+1) = w(t) − η∇wL(w(t)) − λw(t) (2.13)

2.2.9. Local Response Normalization
The efficiency of a training process is sometimes enhanced by local response normalization (LRN). It is performed over local regions of an input image, centered around a point xk (Equation 2.14). The region has size n and consists of points xi.
f(xk) = xk / (1 + α Σi xi²)^β (2.14)
α and β are arbitrary values specified before the training starts.

2.2.10. Xavier Initialization
The background chapter has introduced issues with the initialization of DNNs. If the initial weights are either too large or too small, the model is unable to converge to the global minimum. To face this problem, Xavier initialization is often used. The weights of the model are randomly initialized, usually taken from a Gaussian distribution with variance determined from (Equation 2.15):
Var(W) = 1/nin (2.15)
where W stands for the random distribution of the node to be initialized. The size of the variance depends on the number of input connections (nin) to the particular node. Alternative versions of Xavier initialization also exist. They often include the number of outgoing connections in the variance formula.
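A minimal sketch of Xavier initialization as described in Equation 2.15, assuming a Gaussian with variance 1/nin; the layer sizes in the example are arbitrary and only serve to show the idea.

```python
import numpy as np

def xavier_init(n_in, n_out, seed=0):
    """Draw a weight matrix from a Gaussian whose variance shrinks with the
    number of input connections n_in (Equation 2.15)."""
    rng = np.random.default_rng(seed)
    std = np.sqrt(1.0 / n_in)                     # variance 1/n_in
    return rng.normal(0.0, std, size=(n_out, n_in))

W = xavier_init(n_in=32 * 32 * 3, n_out=1000)     # a hypothetical fully connected layer
print(W.std())                                    # empirical std close to sqrt(1/3072)
```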
2.3. Convolutional Neural Networks (CNNs / ConvNets)
A CNN is a special case of the neural network described above. A CNN consists of one or more convolutional layers, often with a subsampling layer, which are followed by one or more fully connected layers as in a standard neural network. The design of a CNN is motivated by the discovery of a visual mechanism, the visual cortex, in the brain (Figure 2.7). The visual cortex contains a lot of cells that are responsible for detecting light in small, overlapping sub-regions of the visual field, which are called receptive fields. These cells act as local filters over the input space, and the more complex cells have larger receptive fields. The convolution layer in a CNN performs the function that is performed by the cells in the visual cortex.

Figure 2.7: i) Visual cortex of the human brain; ii) A schematic diagram of the model LGN and cortex. The model visual cortex is composed of 48×48 model cortical neurons, which have separate dendritic fields. The model LGN is given as four sheets of different cell types. Each sheet is composed of 24×24 model LGN cells, whose receptive field centers are arranged retinotopically.

A typical CNN is shown in Figure 2.9. Each feature of a layer receives inputs from a set of features located in a small neighborhood in the previous layer called a local receptive field. With local receptive fields, features can extract elementary visual features, such as oriented edges, end-points, corners, etc., which are then combined by the higher layers. In the traditional model of pattern/image recognition, a hand-designed feature extractor gathers relevant information from the input and eliminates irrelevant variabilities. The extractor is followed by a trainable classifier, a standard neural network that classifies feature vectors into classes. In a CNN, the convolution layers play the role of the feature extractor, but they are not hand designed. The convolution filter kernel weights are decided on as part of the training process. Convolutional layers are able to extract the local features because they restrict the receptive fields of the hidden layers to be local. For image classification, it is common to use convolutional neural networks (CNNs) as they were designed to extract information from 2D and higher-order input spaces. Convolutional neural networks, thanks to their multiple levels of feature-extracting layers, use a minimum of preprocessing, hence it is not necessary to consider feature extraction issues. A CNN's weights are designed to form a convolutional filter that is replicated over the whole visual field. All units of the convolutional layer share the same weights within the layer, which decreases the number of free parameters to learn and thus simplifies the training process. The filter is used to convolve an image; each filter convolves the pixels it covers. The outputs of all these filters form a feature map. Convolutional layers usually contain several feature maps for a richer representation of the image content. Each feature map is produced by a different filter. A convolutional layer is typically defined by the number of feature maps, the kernel size (size of the filter) and the stride parameter (the size of the step over image pixels when applying the filter). CNNs are used in a variety of areas, including image and pattern recognition, speech recognition, natural language processing, and video analysis. There are several reasons that convolutional neural networks are becoming important:
• In traditional models for pattern recognition, feature extractors are hand designed. In CNNs, the weights of the convolutional layer being used for feature extraction as well as the fully connected layer being used for classification are determined during the training process.
• The improved network structures of CNNs lead to savings in memory requirements and computational complexity requirements and, at the same time, give better performance for applications where the input has local correlation (e.g., image and speech).
• The large requirements of computational resources for training and evaluation of CNNs are sometimes met by graphic processing units (GPUs), DSPs, or other silicon architectures optimized for high throughput and low energy when executing the idiosyncratic patterns of CNN computation. In fact, advanced processors such as the Tensilica Vision P5 DSP for Imaging and Computer Vision from Cadence have
an almost ideal set of computation and memory resources required for running CNNs at high efficiency.
• In pattern and image recognition applications, the best possible correct detection rates (CDRs) have been achieved using CNNs. For example, CNNs have achieved a CDR of 99.77% using the MNIST database of handwritten digits [59], a CDR of 97.47% with the NORB dataset of 3D objects [60], and a CDR of 97.6% on ~5600 images of more than 10 objects. CNNs not only give the best performance compared to other detection algorithms, they even outperform humans in cases such as classifying objects into fine-grained categories such as the particular breed of dog or species of bird [61].
• Figure 2.8 shows a typical vision algorithm pipeline, which consists of four stages: pre-processing the image, detecting regions of interest (ROI) that contain likely objects, object recognition, and vision decision making. The pre-processing step is usually dependent on the details of the input, especially the camera system, and is often implemented in a hardwired unit outside the vision subsystem. The decision making at the end of the pipeline typically operates on recognized objects; it may make complex decisions, but it operates on much less data, so these decisions are not usually computationally hard or memory-intensive problems. The big challenge is in the object detection and recognition stages, where CNNs are now having a wide impact.

Figure 2.8: Vision algorithm pipeline

2.3.1. Typical CNN Structure
The CNN's structure is inspired by the Neocognitron, composed of two alternating types of layers. The layers typically used in convolutional neural networks are listed below:
• Input – This layer will hold the raw pixel values of the image, in this case an image of the same height and width, with three color channels R, G, B.
• Convolutional – Nodes of a convolutional layer perform convolution on different parts of the image. This layer serves as a feature extractor. This layer will compute the output of neurons that are connected to local regions in the input, each computing a dot product between their weights and a small region they are connected to in the input volume. This may result in a volume as output instead of an image.
• ReLU (Rectified Linear Unit) – This layer will apply an elementwise activation function, such as the max(0, x) thresholding at zero. This leaves the size of the volume unchanged. This layer introduces non-linearity into the system.
• Pooling/Subsampling – This layer subsamples feature maps to reduce variance within local regions of the image. The pooling layer will perform a down-sampling operation along the spatial dimensions (width, height), resulting in a volume of reduced size.
It splits the image into rectangular regions and takes out a value determined by the type of pooling layer. The most popular type of pooling layer in CNNs is the max-pooling layer, which extracts the maximum value of the sub-regions of the feature map.
• Fully connected – As with ordinary neural networks and as the name implies, each neuron in this layer will be connected to all the numbers in the previous volume, and each neuron in this layer takes an input from all the previous layer's neurons. This layer will compute the class scores, resulting in a volume of size 1×1×N. The reasoning of the network is performed by its fully connected layers.
• Classifier – Outputs posterior probabilities for each class.
A standard convolutional neural network consists of one or more pairs of a convolutional layer and a subsequent max-pooling layer, followed by one or more fully connected layers using a rectifying activation function. The output layer is often constructed as a combination of the softmax activation function and the cross-entropy loss function (Equation 2.6).

2.3.2. Layers of CNNs
By stacking multiple and different layers in a CNN, complex architectures are built for classification problems. Four types of layers are most common: convolution layers, pooling/sub-sampling layers, non-linear layers, and fully connected layers.

Convolution Layers
The convolution operation extracts different features of the input. The first convolution layer extracts low-level features like edges, lines, and corners. Higher-level layers extract higher-level features. Figure 2.10 illustrates the process of 3D convolution used in CNNs. The input is of size N × N × D and is convolved with H kernels, each of size k × k × D, separately. Convolution of an input with one kernel produces one output feature, and with H kernels independently produces H features. Starting from the top-left corner of the input, each kernel is moved from left to right, one element at a time. Once the top-right corner is reached, the kernel is moved one element in a downward direction, and again the kernel is moved from left to right, one element at a time. This process is repeated until the kernel reaches the bottom-right corner. For example, when N = 32 and k = 5, there are 28 unique positions from left to right and 28 unique positions from top to bottom that the kernel can take. Corresponding to these positions, each feature in the output will contain 28×28 (i.e., (N−k+1) × (N−k+1)) elements. For each position of the kernel in a sliding window process, k × k × D elements of the input and k × k × D elements of the kernel are element-by-element multiplied and accumulated. So to create one element of one output feature, k × k × D multiply-accumulate operations are required.

Figure 2.9: Typical block diagram of a CNN

Figure 2.10: A representation of the convolution process
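To make the sliding-window arithmetic above concrete, the following NumPy sketch convolves one N × N × D input with a single k × k × D kernel and reproduces the (N − k + 1) × (N − k + 1) output size for N = 32, k = 5. It is a naive illustrative implementation, not the code used in the experiments.

```python
import numpy as np

def convolve_valid(feature_map, kernel):
    """Naive 'valid' convolution of one N x N x D input with one k x k x D kernel.
    Produces an (N - k + 1) x (N - k + 1) output feature, as described in the text."""
    N, _, D = feature_map.shape
    k = kernel.shape[0]
    out = np.zeros((N - k + 1, N - k + 1))
    for i in range(N - k + 1):                    # slide the kernel top to bottom
        for j in range(N - k + 1):                # and left to right
            window = feature_map[i:i + k, j:j + k, :]
            out[i, j] = np.sum(window * kernel)   # k*k*D multiply-accumulate operations
    return out

x = np.random.rand(32, 32, 3)        # N = 32, D = 3
w = np.random.rand(5, 5, 3)          # k = 5
print(convolve_valid(x, w).shape)    # (28, 28), i.e. (N - k + 1) x (N - k + 1)
```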
Let Wi be a filter set with dimension Ci × Ci−1 × Ni × Ni, where Ci and Ci−1 are the number of channels of the output and input of this layer respectively, and Ni is the square-size parameter of the filters. The parameter Ci represents the number of filters in the set Wi. Each of the filters has a corresponding bias term, resulting in a bias vector bi with Ci elements. Hence, the output of this layer is obtained from the output of the previous layer, the bias term and the corresponding filter set as
Xi = Wi ∗ Xi−1 + bi (2.16)
where ∗ represents the linear convolution operation. This operation results in the dimension of the output Xi being Ci × Mv,i × Mh,i from an input Xi−1 with shape Ci−1 × Mv,i−1 × Mh,i−1. There is a positive parameter called 'stride' which can be set to a value that will cause the spatial dimensions to change, resulting in up-sampling or down-sampling. The spatial dimensions remain the same when the parameter is set to 1. If it is set to a value greater than unity then the dimensions decrease, and if it is set to a value less than unity, the spatial dimensions increase. A general tendency is to set the parameter to 1 and obtain the dimensionality reduction, when required, using a pooling layer. In the model descriptions, a convolution layer is referred to as CN(Ci, Ni).

Pooling/Subsampling Layers
The pooling/subsampling layer reduces the resolution of the features. It makes the features robust against noise and distortion. Its function is to progressively reduce the spatial size of the representation to reduce the amount of parameters and computation in the network, and hence to also control overfitting. There are two ways to do pooling: max pooling and average pooling. In both cases, the input is divided into non-overlapping two-dimensional spaces. For example, in Figure 2.9, layer 2 is the pooling layer. Each input feature is 28×28 and is divided into 14×14 regions of size 2×2. For average pooling, the average of the four values in the region is calculated. For max pooling, the maximum of the four values is selected. Figure 2.11 elaborates the pooling process further. The input is of size 4×4. For 2×2 subsampling, a 4×4 image is divided into four non-overlapping matrices of size 2×2. In the case of max pooling, the maximum value of the four values in the 2×2 matrix is the output. In the case of average pooling, the average of the four values is the output. Please note that for the output with index (2,2), the result of averaging is a fraction that has been rounded to the nearest integer.

Figure 2.11: A representation of max pooling and average pooling
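A short NumPy sketch of the two pooling modes on a 4×4 example like the one in Figure 2.11. The function name is an illustrative assumption, and the average is left as a float rather than rounded.

```python
import numpy as np

def pool2d(feature_map, size=2, stride=2, mode="max"):
    """Max or average pooling over non-overlapping regions of a 2D feature map."""
    h, w = feature_map.shape
    out_h, out_w = (h - size) // stride + 1, (w - size) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            region = feature_map[i * stride:i * stride + size,
                                 j * stride:j * stride + size]
            out[i, j] = region.max() if mode == "max" else region.mean()
    return out

x = np.array([[1, 3, 2, 9],
              [5, 6, 1, 4],
              [7, 2, 8, 3],
              [0, 1, 4, 2]], dtype=float)
print(pool2d(x, mode="max"))   # 2x2 output holding the maximum of each region
print(pool2d(x, mode="avg"))   # 2x2 output holding the average of each region
```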
Non-linear Layers
Neural networks in general and CNNs in particular rely on a non-linear "trigger" function to signal distinct identification of likely features on each hidden layer. CNNs may use a variety of specific functions, such as rectified linear units (ReLUs) and continuous trigger (non-linear) functions, to efficiently implement this non-linear triggering.

ReLU
A ReLU implements the function
Xi = max(0, Xi−1) (2.17)
In other words, only non-negative values are kept as is and the other values are set to zero. So the input and output sizes of this layer are the same. It increases the nonlinear properties of the decision function and of the overall network without affecting the receptive fields of the convolution layer. In comparison to the other non-linear functions used in CNNs (e.g., hyperbolic tangent, absolute of hyperbolic tangent, and sigmoid), the advantage of a ReLU is that the network trains many times faster. In addition, the ReLU unit helps the neural network to attain a better sparse representation ([52]). It is customary for a convolution layer to be followed by ReLU activation. ReLU functionality is illustrated in Figure 2.12, with its transfer function plotted above the arrow.

Figure 2.12: A representation of ReLU functionality

Continuous Trigger (Non-Linear) Function
The non-linear layer operates element by element on each feature. A continuous trigger function can be the hyperbolic tangent (Figure 2.13), the absolute of the hyperbolic tangent (Figure 2.14), or the sigmoid (Figure 2.15). Figure 2.16 demonstrates how the non-linearity gets applied element by element.

Figure 2.13: The hyperbolic tangent function
Figure 2.14: Absolute of hyperbolic tangent function
Figure 2.15: The sigmoid function
Figure 2.16: A representation of tanh processing

Fully Connected Layers
Fully connected layers are often used as the final layers of a CNN. These layers mathematically sum a weighting of the previous layer of features, indicating the precise mix of "ingredients" to determine a specific target output result. In the case of a fully connected layer, all the elements of all the features of the previous layer get used in the calculation of each element of each output feature. Figure 2.17 explains the fully connected layer L. Layer L−1 has two features, each of which is 2×2, i.e., has four elements. Layer L has two features, each having a single element.

Figure 2.17: Processing of a fully connected layer
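The fully connected computation of Figure 2.17 is simply a matrix multiplication over the flattened features of the previous layer. The small NumPy sketch below mirrors the example of two 2×2 input features producing two single-element outputs; the weight values are random placeholders.

```python
import numpy as np

# Layer L-1 has two 2x2 features (8 values in total); layer L has two
# single-element features, so the layer reduces to a (2 x 8) matrix multiply.
prev_features = np.random.rand(2, 2, 2)      # two 2x2 feature maps from layer L-1
x = prev_features.reshape(-1)                # flatten to a vector of 8 elements
W = np.random.randn(2, 8) * 0.1              # every output uses every input element
b = np.zeros(2)
layer_L = W @ x + b                          # two output features, one element each
print(layer_L)
```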
2.4. Advantage of CNN over NN
While neural networks and other pattern detection methods have been around for the past 50 years, there has been significant development in the area of convolutional neural networks in the recent past. This section covers the advantages of using a CNN for image recognition.
• Ruggedness to shifts and distortion in the image. Detection using a CNN is rugged to distortions such as change in shape due to the camera lens, different lighting conditions, different poses, presence of partial occlusions, horizontal and vertical shifts, etc. In particular, CNNs are shift invariant since the same weight configuration is used across space. In theory, we can also achieve shift invariance using fully connected layers, but the outcome of training in this case is multiple units with identical weight patterns at different locations of the input. To learn these weight configurations, a large number of training instances would be required to cover the space of possible variations.
• Fewer memory requirements. In this same hypothetical case where we use a fully connected layer to extract the features, an input image of size 32×32 and a hidden layer having 1000 features will require on the order of 10^6 coefficients, a huge memory requirement. In the convolutional layer, the same coefficients are used across different locations in the space, so the memory requirement is drastically reduced.
• Easier and better training. Again using the standard neural network that would be equivalent to a CNN, because the number of parameters would be much higher, the training time would also increase proportionately. In a CNN, since the number of parameters is drastically reduced, training time is proportionately reduced. Also, assuming perfect training, we can design a standard neural network whose performance would be the same as that of a CNN. But in practical training, a standard neural network equivalent to a CNN would have more parameters, which would lead to more noise addition during the training process. Hence, the performance of a standard neural network equivalent to a CNN will always be poorer.

2.5. Conclusion
In this chapter a brief description of the NN and the CNN is presented. The reason for applying a CNN to the CR task is also explained. In the next chapter, the different models of CNN used in this experiment on the Bangla CR task will be presented along with a comparison of their performances in terms of recognition accuracy rates.

Chapter 3
Proposed DCNN Architecture

3.1. Introduction
This chapter presents the structure of the DCNN models used in this experiment. The performances of the different DCNN structures used for the experiments in this thesis are shown. The number of kernels in the different convolution layers, the sizes of the kernels, the depth of the network and the number of neurons in the classifier layers all have their effects on the performance of the recognizer. The architecture that gives the best output (Model no. 4) in terms of the recognition accuracy rates is presented in this chapter.

3.2. DCNN Architectures
In this work five different architectures of DCNN are used for the recognition task and their performances are compared to determine the most optimized network size for better recognition accuracy. Among these five architectures, Model 4 gives the best result. The descriptions of the five architectures are given below:

3.2.1. Model 1 Architecture
Model 1 consists of 3 convolutional layers and 1 affine (fully connected) layer. It takes 32×32 RGB images as input.
The 1st, 2nd and 3rd convolution layers contain 32, 64 and 128 receptive fields (kernels) respectively. The kernels in all the convolutional layers are of equal size: 3×3. After the 1st convolution layer, ReLU is used as the activation function, but no sub-sampling layer is used. So, after the 1st convolution between the 32×32 input image (for each of the RGB channels) and 32 nos of 3×3 kernels for each channel, the size of the feature maps becomes 32×32×32. Padding 1 and stride 1 are used for the convolution operation. Application of the ReLU activation does not change the number of parameters. Pooling is not used in the first layer. After each of the 2nd and 3rd convolution layers, the ReLU function and MaxPooling (sub-sampling) with stride 2 are used in Model 1. Padding 1 and stride 1 are used for the convolution operation. Both the pooling height and width are 2. So, after the 2nd convolution between the 32×32×32 feature maps and 64 nos of 3×3 kernels, the size of the feature maps becomes 32×32×64. After the first pooling, the feature map size is reduced to 16×16×64. After the 3rd convolution layer with 128 nos of kernels, the feature size becomes 16×16×128. After the second pooling, the feature map size is reduced to 8×8×128. And at the end, one fully-connected layer with 50 neurons is used.

Figure 3.1: Model 1 DCNN architecture for BHCR

Function
Let the input image be X (3×32×32),
Layer 1 (Conv) : L1 ≡ W1*X + B1
Layer 2 (ReLU) : L2 ≡ max(0, L1)
Layer 3 (Conv) : L3 ≡ W2*L2 + B2
Layer 4 (ReLU) : L4 ≡ max(0, L3)
Layer 5 (Pooling) : L5 ≡ MaxPooling(L4, Size: 2×2, Stride = 2)
Layer 6 (Conv) : L6 ≡ W3*L5 + B3
Layer 7 (ReLU) : L7 ≡ max(0, L6)
Layer 8 (Pooling) : L8 ≡ MaxPooling(L7, Size: 2×2, Stride = 2)
Layer 9 (Affine) : L9 ≡ W4L8 + B4
Layer 10 (Softmax) : L10 ≡ SoftMax(L9)
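For readers who want to experiment, the Model 1 stack described above can be expressed compactly in a deep learning framework. The following PyTorch sketch is one possible rendering under the stated kernel sizes, padding and strides; the framework choice and variable names are assumptions, since the thesis does not prescribe an implementation.

```python
import torch
import torch.nn as nn

# Illustrative PyTorch sketch of Model 1 (3 convolution layers + 1 affine layer, 50 classes).
model1 = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),    # L1: 3x32x32 -> 32x32x32
    nn.ReLU(),                                     # L2
    nn.Conv2d(32, 64, kernel_size=3, padding=1),   # L3: -> 64x32x32
    nn.ReLU(),                                     # L4
    nn.MaxPool2d(kernel_size=2, stride=2),         # L5: -> 64x16x16
    nn.Conv2d(64, 128, kernel_size=3, padding=1),  # L6: -> 128x16x16
    nn.ReLU(),                                     # L7
    nn.MaxPool2d(kernel_size=2, stride=2),         # L8: -> 128x8x8
    nn.Flatten(),
    nn.Linear(128 * 8 * 8, 50),                    # L9: affine layer with 50 neurons
)

x = torch.randn(1, 3, 32, 32)                      # one dummy RGB input image
print(model1(x).shape)                             # torch.Size([1, 50])
```

In PyTorch the softmax of Layer 10 is usually folded into the cross-entropy loss during training rather than added as a separate module.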
3.2.2. Model 2 Architecture
Similar to Model 1, Model 2 consists of 3 convolutional layers and 1 affine (fully connected) layer. It takes 32×32 RGB images as input. The 1st, 2nd and 3rd convolution layers contain 32, 64 and 128 receptive fields (kernels) respectively. The difference between Model 1 and Model 2 is that, unlike Model 1, the kernel size in the first convolutional layer of Model 2 is 5×5; the kernels in the 2nd and 3rd convolutional layers are of equal size: 3×3. After the 1st convolution layer, ReLU is used as the activation function, but no sub-sampling layer is used. So, after the 1st convolution between the 32×32 input image (for each of the RGB channels) and 32 nos of 5×5 kernels for each channel, the size of the feature maps becomes 32×32×32. Padding 2 and stride 1 are used for the convolution operation. Application of the ReLU activation does not change the number of parameters. Pooling is not used in the first layer. After each of the 2nd and 3rd convolution layers, the ReLU function and MaxPooling (sub-sampling) with stride 2 are used in Model 2. Padding 1 and stride 1 are used for the convolution operation. Both the pooling height and width are 2. So, after the 2nd convolution between the 32×32×32 feature maps and 64 nos of 3×3 kernels, the size of the feature maps becomes 32×32×64. After the first pooling, the feature map size is reduced to 16×16×64. After the 3rd convolution layer with 128 nos of kernels, the feature size becomes 16×16×128. After the second pooling, the feature map size is reduced to 8×8×128. And at the end, one fully-connected layer with 50 neurons is used.

Figure 3.2: Model 2 DCNN architecture for BHCR

Function
Let the input image be X (3×32×32),
Layer 1 (Conv) : L1 ≡ W1*X + B1
Layer 2 (ReLU) : L2 ≡ max(0, L1)
Layer 3 (Conv) : L3 ≡ W2*L2 + B2
Layer 4 (ReLU) : L4 ≡ max(0, L3)
Layer 5 (Pooling) : L5 ≡ MaxPooling(L4, Size: 2×2, Stride = 2)
Layer 6 (Conv) : L6 ≡ W3*L5 + B3
Layer 7 (ReLU) : L7 ≡ max(0, L6)
Layer 8 (Pooling) : L8 ≡ MaxPooling(L7, Size: 2×2, Stride = 2)
Layer 9 (Affine) : L9 ≡ W4L8 + B4
Layer 10 (Softmax) : L10 ≡ SoftMax(L9)

3.2.3. Model 3 Architecture
Model 3 is similar to Model 2, but unlike Model 2, it has 2 affine layers at the end. Model 3 consists of 3 convolutional layers and 2 affine (fully connected) layers. It takes 32×32 RGB images as input. The 1st, 2nd and 3rd convolution layers contain 32, 64 and 128 receptive fields (kernels) respectively. The kernel size in the first convolutional layer of Model 3 is 5×5; the kernels in the 2nd and 3rd convolutional layers are of equal size: 3×3. After the 1st convolution layer, ReLU is used as the activation function, but no sub-sampling layer is used. So, after the 1st convolution between the 32×32 input image (for each of the RGB channels) and 32 nos of 5×5 kernels for each channel, the size of the feature maps becomes 32×32×32. Padding 2 and stride 1 are used for the convolution operation. Application of the ReLU activation does not change the number of parameters. Pooling is not used in the first layer. After each of the 2nd and 3rd convolution layers, the ReLU function and MaxPooling (sub-sampling) with stride 2 are used in Model 3. Padding 1 and stride 1 are used for the convolution operation. Both the pooling height and width are 2. So, after the 2nd convolution between the 32×32×32 feature maps
and 64 nos of 3×3 kernels, the size of the feature maps becomes 32×32×64. After the first pooling, the feature map size is reduced to 16×16×64. After the 3rd convolution layer with 128 nos of kernels, the feature size becomes 16×16×128. After the second pooling, the feature map size is reduced to 8×8×128. And at the end, two fully-connected layers with 3000 and 50 neurons respectively are used as the classifier.

Figure 3.3: Model 3 DCNN architecture for BHCR

Function
Let the input image be X (3×32×32),
Layer 1 (Conv) : L1 ≡ W1*X + B1
Layer 2 (ReLU) : L2 ≡ max(0, L1)
Layer 3 (Conv) : L3 ≡ W2*L2 + B2
Layer 4 (ReLU) : L4 ≡ max(0, L3)
Layer 5 (Pooling) : L5 ≡ MaxPooling(L4, Size: 2×2, Stride = 2)
Layer 6 (Conv) : L6 ≡ W3*L5 + B3
Layer 7 (ReLU) : L7 ≡ max(0, L6)
Layer 8 (Pooling) : L8 ≡ MaxPooling(L7, Size: 2×2, Stride = 2)
Layer 9 (Affine) : L9 ≡ W4L8 + B4
Layer 10 (Affine) : L10 ≡ W5L9 + B5
Layer 11 (Softmax) : L11 ≡ SoftMax(L10)

3.2.4. Model 4 (Proposed DCNN) Architecture
Among the different models used in this experiment, the best DCNN architecture is Model 4. Model 4 is almost the same as Model 3; the only difference is in the number of neurons used in the first affine layer. The proposed DCNN architecture consists of 3 convolutional layers and 2 affine (fully connected) layers. It takes 32×32 RGB images as input. The 1st, 2nd and 3rd convolution layers contain 32, 64 and 128 receptive fields (kernels) respectively. The kernel size in the first convolutional layer of Model 4 is 5×5; the kernels in the 2nd and 3rd convolutional layers are of equal size: 3×3. After the 1st convolution layer, ReLU is used as the activation function, but no sub-sampling layer is used. So, after the 1st convolution between the 32×32 input image (for each of the RGB channels) and 32 nos of 5×5 kernels for each channel, the size of the feature maps becomes 32×32×32. Padding 2 and stride 1 are used for the convolution operation. Application of the ReLU activation does not change the number of parameters. Pooling is not used in the first layer. After each of the 2nd and 3rd convolution layers, the ReLU function and MaxPooling (sub-sampling) with stride 2 are used in Model 4. Padding 1 and stride 1 are used for the convolution operation. Both the pooling height and width are 2. So, after the 2nd convolution between the 32×32×32 feature maps and 64 nos of 3×3 kernels, the size of the feature maps becomes 32×32×64. After the first pooling, the feature map size is reduced to 16×16×64. After the 3rd convolution layer with
3.2.4. Model 4 (Proposed DCNN) Architecture

Among the different models used in this experiment, the best DCNN architecture is Model 4. Model 4 is almost the same as Model 3; the only difference is the number of neurons used in the first affine layer. The proposed DCNN architecture consists of 3 convolutional layers and 2 affine (fully connected) layers and takes 32×32 RGB images as input. The 1st, 2nd and 3rd convolution layers contain 32, 64 and 128 receptive fields (kernels) respectively. The kernel size in the first convolutional layer is 5×5; the kernels in the 2nd and 3rd convolutional layers are of equal size, 3×3. After the 1st convolution layer, ReLU is used as the activation function but no sub-sampling layer is used. So, after the 1st convolution between the 32×32 input image (for each of the three RGB channels) and 32 kernels of size 5×5, the size of the feature maps becomes 32×32×32; padding 2 and stride 1 are used for this convolution. Application of the ReLU activation does not change the number of parameters, and no pooling is used after the first layer. After each of the 2nd and 3rd convolution layers, a ReLU function and MaxPooling (sub-sampling) with stride 2 are used; padding 1 and stride 1 are used for these convolutions, and both the pooling height and width are 2. So, after the 2nd convolution between the 32×32×32 feature maps and 64 kernels of size 3×3, the size of the feature maps becomes 32×32×64. After the first pooling, the feature map size is reduced to 16×16×64. After the 3rd convolution layer with 128 kernels, the feature map size becomes 16×16×128, and after the second pooling it is reduced to 7×7×128. At the end, two fully-connected layers with 3500 and 50 neurons respectively are used as the classifier.

There are 2,400 weight parameters (32 kernels of size 5×5 for each of the 3 input channels) and 32 bias parameters in the first layer of the CNN. Layer 2 contains 18,432 weight and 64 bias parameters, and layer 3 contains 73,728 weight and 128 bias parameters. Layer 4 has 21,952,000 weight parameters and 3,500 bias parameters, and the final layer (layer 5) contains 175,000 weight and 50 bias parameters. The network therefore contains a total of 22,225,334 weight and bias parameters.

Table 3.1: Parameter setup for the DCNN
Layer X (Input): 3 feature maps of size 32×32
Layer C1 (Convolution): 32 feature maps of size 32×32, kernel 5×5, parameters 3×32×5×5 + 32 = 2,432
Layer RL1 (ReLU): 32 feature maps of size 32×32
Layer C2 (Convolution): 64 feature maps of size 32×32, kernel 3×3, parameters 32×64×3×3 + 64 = 18,496
Layer RL2 (ReLU): 64 feature maps of size 32×32
Layer S2 (Max-pooling): 64 feature maps of size 16×16, pool 2×2
Layer C3 (Convolution): 128 feature maps of size 16×16, kernel 3×3, parameters 64×128×3×3 + 128 = 73,856
Layer RL3 (ReLU): 128 feature maps of size 16×16
Layer S3 (Max-pooling): 128 feature maps of size 8×8, pool 2×2
Layer FC1 (Affine): 3500 neurons, parameters 128×7×7×3500 + 3500 = 21,955,500
Layer FC2 (Affine): 50 neurons, parameters 3500×50 + 50 = 175,050
Total: 22,225,334

Figure 3.4: Proposed DCNN architecture (Model 4) for BHCR (input 3×32×32 → conv, 32 kernels of 5×5, ReLU → conv, 64 kernels of 3×3, ReLU → max pooling, stride 2 → conv, 128 kernels of 3×3, ReLU → max pooling, stride 2 → affine layer with 3500 neurons → affine layer with 50 neurons).

Function: let the input image be X (3×32×32).
Layer 1 (Conv): L1 ≡ W1*X + B1
Layer 2 (ReLU): L2 ≡ max(0, L1)
Layer 3 (Conv): L3 ≡ W2*L2 + B2
Layer 4 (ReLU): L4 ≡ max(0, L3)
Layer 5 (Pooling): L5 ≡ MaxPooling(L4, size 2×2, stride 2)
Layer 6 (Conv): L6 ≡ W3*L5 + B3
Layer 7 (ReLU): L7 ≡ max(0, L6)
Layer 8 (Pooling): L8 ≡ MaxPooling(L7, size 2×2, stride 2)
Layer 9 (Affine): L9 ≡ W4·L8 + B4
Layer 10 (Affine): L10 ≡ W5·L9 + B5
Layer 11 (Softmax): L11 ≡ SoftMax(L10)
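The per-layer counts in Table 3.1 can be re-derived mechanically. The sketch below simply mirrors the table's own formulas (including its 128×7×7 flattened size for FC1) and is not part of the thesis code:

```python
# Sketch that re-derives the per-layer parameter counts of Table 3.1.
def conv_params(in_ch, out_ch, k):
    return in_ch * out_ch * k * k + out_ch        # weights + biases

def affine_params(n_in, n_out):
    return n_in * n_out + n_out                   # weights + biases

layers = {
    "C1":  conv_params(3, 32, 5),                 # 2,432
    "C2":  conv_params(32, 64, 3),                # 18,496
    "C3":  conv_params(64, 128, 3),               # 73,856
    "FC1": affine_params(128 * 7 * 7, 3500),      # 21,955,500 (7x7 as in Table 3.1)
    "FC2": affine_params(3500, 50),               # 175,050
}
print(layers, sum(layers.values()))               # total: 22,225,334
```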
3.2.5. Model 5 Architecture

Model 5 is the deepest network in this study. It consists of 4 convolutional layers and 2 affine (fully connected) layers and takes 68×68 RGB images as input. The 1st, 2nd, 3rd and 4th convolution layers contain 32, 48, 64 and 96 receptive fields (kernels) respectively, with kernel sizes of 7×7, 5×5, 3×3 and 3×3 respectively. After the 1st convolution layer, ReLU is used as the activation function but no sub-sampling layer is used. So, after the 1st convolution between the 68×68 input image (for each of the three RGB channels) and 32 kernels of size 7×7, the size of the feature maps becomes 68×68×32; padding 3 and stride 1 are used for this convolution. Application of the ReLU activation does not change the number of parameters, and no pooling is used after the first layer. After the 2nd convolution layer, a ReLU function and MaxPooling (sub-sampling) with stride 2 are used; padding 2 and stride 1 are used for this convolution, and both the pooling height and width are 2. So, after the 2nd convolution between the 68×68×32 feature maps and 48 kernels of size 5×5, the size of the feature maps becomes 68×68×48, and after the first pooling it is reduced to 34×34×48. After each of the 3rd and 4th convolution layers, a ReLU function and MaxPooling (sub-sampling) with stride 2 are again used. After the 3rd convolution layer with 64 kernels, the feature map size becomes 34×34×64; after the second pooling with size 2×2 and stride 2, it is reduced to 17×17×64. After the 4th convolution layer with 96 kernels, the feature map size becomes 17×17×96; after the third pooling with size 2×2 and stride 2, it is reduced to 9×9×96. At the end, two fully-connected layers with 3000 and 50 neurons respectively are used as the classifier.

Figure: Model 5 DCNN architecture for BHCR (input 3×68×68 → conv, 32 kernels of 7×7, ReLU → conv, 48 kernels of 5×5, ReLU → max pooling, stride 2 → conv, 64 kernels of 3×3, ReLU → max pooling, stride 2 → conv, 96 kernels of 3×3, ReLU → max pooling, stride 2 → affine layer with 3000 neurons → affine layer with 50 neurons).

Function: let the input image be X (3×68×68).
Layer 1 (Conv): L1 ≡ W1*X + B1
Layer 2 (ReLU): L2 ≡ max(0, L1)
Layer 3 (Conv): L3 ≡ W2*L2 + B2
Layer 4 (ReLU): L4 ≡ max(0, L3)
Layer 5 (Pooling): L5 ≡ MaxPooling(L4, size 2×2, stride 2)
Layer 6 (Conv): L6 ≡ W3*L5 + B3
Layer 7 (ReLU): L7 ≡ max(0, L6)
Layer 8 (Pooling): L8 ≡ MaxPooling(L7, size 2×2, stride 2)
Layer 9 (Conv): L9 ≡ W4*L8 + B4
Layer 10 (ReLU): L10 ≡ max(0, L9)
Layer 11 (Pooling): L11 ≡ MaxPooling(L10, size 2×2, stride 2)
Layer 12 (Affine): L12 ≡ W5·L11 + B5
Layer 13 (Affine): L13 ≡ W6·L12 + B6
Layer 14 (Softmax): L14 ≡ SoftMax(L13)
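Every model in this chapter uses the same 2×2, stride-2 max-pooling for sub-sampling. A minimal NumPy sketch of that operation is shown below (illustrative only; it assumes even spatial dimensions, as in the 32×32 and 16×16 maps used here):

```python
import numpy as np

# Minimal NumPy sketch of 2x2, stride-2 max-pooling over channel-first maps.
def maxpool_2x2(x):
    # x: feature maps with shape (channels, height, width), height/width even
    c, h, w = x.shape
    x = x.reshape(c, h // 2, 2, w // 2, 2)   # group non-overlapping 2x2 windows
    return x.max(axis=(2, 4))                # take the maximum of each window

maps = np.random.randn(64, 32, 32)
print(maxpool_2x2(maps).shape)               # (64, 16, 16)
```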
Table 3.2: Comparison between the deep-CNN models (all models use regularization factor 0.001, learning rate 0.0001 and learning rate decay 0.95).

Model 1: database CMATERdb 3.1.2; 3 convolution layers (32 kernels of 3×3, Conv-ReLU; 64 kernels of 3×3, Conv-ReLU-Pool 2×2, stride 2; 128 kernels of 3×3, Conv-ReLU-Pool 2×2, stride 2); 1 affine layer; input size 32×32; batch size 50; 50 epochs; validation accuracy 86.79%.
Model 2: database CMATERdb 3.1.2; 3 convolution layers (32 kernels of 5×5, Conv-ReLU; 64 kernels of 3×3, Conv-ReLU-Pool 2×2, stride 2; 128 kernels of 3×3, Conv-ReLU-Pool 2×2, stride 2); 1 affine layer; input size 32×32; batch size 50; 75 epochs; validation accuracy 88.42%.
Model 3: database Combined Dataset; 3 convolution layers (32 kernels of 5×5, Conv-ReLU; 64 kernels of 3×3, Conv-ReLU-Pool 2×2, stride 2; 128 kernels of 3×3, Conv-ReLU-Pool 2×2, stride 2); 2 affine layers (3000 and 50 neurons); input size 32×32; batch size 100; 50 epochs; validation accuracy 89.96%.
Model 3 (longer training): database Combined Dataset; same architecture as above; batch size 100; 150 epochs; validation accuracy 92.19%.
Model 4 (proposed model): database Combined Dataset; 3 convolution layers (32 kernels of 5×5, Conv-ReLU; 64 kernels of 3×3, Conv-ReLU-Pool 2×2, stride 2; 128 kernels of 3×3, Conv-ReLU-Pool 2×2, stride 2); 2 affine layers (3500 and 50 neurons); input size 32×32; batch size 100; 150 epochs; validation accuracy 92.20%.
Model 5: database Combined Dataset; 4 convolution layers (32 kernels of 7×7, Conv-ReLU; 48 kernels of 5×5, Conv-ReLU-Pool 2×2, stride 2; 64 kernels of 3×3, Conv-ReLU-Pool 2×2, stride 2; 96 kernels of 3×3, Conv-ReLU-Pool 2×2, stride 2); 2 affine layers (3000 and 50 neurons); input size 68×68; batch size 100; 150 epochs; validation accuracy 85.26%.
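For reference, the configurations compared in Table 3.2 can also be written down as plain data records. The sketch below simply transcribes the table; the field names are the sketch's own, and the single affine width assumed for Models 1 and 2 is the 50-class output layer:

```python
# Table 3.2 transcribed as configuration records (illustrative only).
models = [
    {"name": "Model 1", "data": "CMATERdb 3.1.2", "conv": [(32, 3), (64, 3), (128, 3)],
     "affine": [50], "input": 32, "epochs": 50, "val_acc": 86.79},
    {"name": "Model 2", "data": "CMATERdb 3.1.2", "conv": [(32, 5), (64, 3), (128, 3)],
     "affine": [50], "input": 32, "epochs": 75, "val_acc": 88.42},
    {"name": "Model 3", "data": "Combined", "conv": [(32, 5), (64, 3), (128, 3)],
     "affine": [3000, 50], "input": 32, "epochs": 150, "val_acc": 92.19},
    {"name": "Model 4 (proposed)", "data": "Combined", "conv": [(32, 5), (64, 3), (128, 3)],
     "affine": [3500, 50], "input": 32, "epochs": 150, "val_acc": 92.20},
    {"name": "Model 5", "data": "Combined", "conv": [(32, 7), (48, 5), (64, 3), (96, 3)],
     "affine": [3000, 50], "input": 68, "epochs": 150, "val_acc": 85.26},
]
best = max(models, key=lambda m: m["val_acc"])
print(best["name"])   # Model 4 (proposed)
```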
3.3. Conclusion

In this chapter, a brief description of the proposed DCNN model and the other candidate architectures has been presented. There is a significant change in accuracy level across the different model architectures, so the number of kernels in the different convolution layers, the sizes of the kernels, the depth of the network and the number of neurons in the classifier layers all affect the performance of the recognizer. Among the different models used in this experiment, the best DCNN architecture is Model 4, which has 3 convolutional layers and 2 affine (fully connected) layers. The 1st, 2nd and 3rd convolution layers contain 32, 64 and 128 receptive fields (kernels) respectively; the kernel size in the first convolutional layer is 5×5, and the kernels in the 2nd and 3rd convolutional layers are of equal size, 3×3. In the next chapter, a description of the database and the experimental platform will be presented. After that, the characteristics of the learning process and the performance of the proposed model with respect to other BHCR techniques will be analyzed.

Chapter 4
Experimental Results

4.1. Introduction

This chapter describes the database and the experimental results of this study. First, the problem is defined. After that, the experimental platform is described in terms of hardware and software. Then the database information, sources and sample data are presented. At the end of the chapter, the performance of the proposed models is analyzed and evaluated.

4.2. Experimental Platform

The experiment has been conducted on a desktop machine (CPU: Intel Core i7-6700K @ 4 GHz, RAM: 16.00 GB, GPU: GeForce GTX 970, Storage: Transcend 128 GB Solid State Drive) in an Ubuntu 16.04 LTS 64-bit OS (Linux) environment. The algorithm ran on the Anaconda 4.2.0 64-bit platform with Jupyter Notebook version 4.2.3, and the DCNN algorithm is implemented in Python 2.7.12. The major libraries and packages used in the implementation are listed in Table 4.1.

Table 4.1: Major libraries and packages used to implement the algorithm
numpy 1.11.1
nose 1.3.7
cython 0.24.1
matplotlib 1.5.3
pandas 0.18.1
scipy 0.18.1
six 1.10.0
sympy 1.0

4.3. Database

4.3.1. Database CMATERdb 3.1.2

Two databases are used in this experiment. One is CMATERdb 3.1.2 [56], containing 12,000 training and 3,000 test samples equally distributed among 50 classes of handwritten Bangla characters. CMATERdb is the pattern recognition database repository created at the 'Center for Microprocessor Applications for Training Education and Research' (CMATER) research laboratory, Jadavpur University, Kolkata 700032, India. Table 4.2 shows sample images.

Table 4.2: Sample images of the CMATERdb 3.1.2 database (character sample images; more sample images are given in Appendix A).

4.3.2. Database BBCD

The other database is referred to as the "Bangla Basic Character Database (BBCD)" [54]; its 37,858 samples were randomly subdivided into training and test sets. Samples of this database were collected using three different types of form documents, viz., a railway reservation form, a job application form, and a tabular form specially designed for data collection. Handwritten samples of the various basic characters, collected from the name and address parts of the first two types of forms, vary widely in number, with only a few samples for rarely occurring Bangla basic characters. Table 4.3 shows some sample images.

Table 4.3: Sample images of the BBCD database (character sample images; more sample images are given in Appendix A).
4.3.3. Combined Database

The two datasets (CMATERdb 3.1.2 and BBCD) are combined to form a larger dataset containing a total of 52,788 samples, subdivided into 28,529 (54.04%) training images, 8,400 (15.91%) validation samples and 15,859 (30.04%) test samples of similar sizes. The dataset contains wide variation among the distinct characters because of different people's writing styles; some of these character images are very complex in shape and closely correlated with others. This is the largest dataset among all reported BHCR works.

4.4. Training of the DCNN

There is no significant preprocessing of the input database. Since the input images are of different sizes, all input images are resized to 32×32 so that they can be fed to the DCNN. The images contain black letters on a white background, so to reduce computational overhead the images are inverted: the foreground characters are changed from black to white and the background is changed to black. The input images are treated as RGB images containing 3 channels with a depth of 8 bits per pixel, and they are then normalized to obtain a zero mean over the complete dataset. The following settings are used for training the DCNN:
• Regularization factor: 0.001
• Learning rate: 0.0001
• Learning rate decay factor: 0.95
• Batch size: 100
• Number of epochs: 150
• Back-propagation method: RMS propagation with SGD and decay rate = 0.99
• Cost function: SoftMax loss function
All weight and bias parameters are initialized randomly using a zero-mean, unit-variance Gaussian distribution.

Figure 4.1: Training and validation accuracy curves versus number of epochs.
Figure 4.2: Cost function versus number of epochs.
Figure 4.3: Learning rate versus number of epochs.
Figure 4.4: Input images in the database (above) and the same images after normalization (below).
Figure 4.5: Sample kernels of the first convolution layer.
Figure 4.6: Feature maps after the first convolution layer.
Figure 4.7: Sample kernels of the second convolution layer.
Figure 4.8: Feature maps after the second convolution layer.
Figure 4.9: Sample kernels of the third convolution layer.
Figure 4.10: Feature maps after the third convolution layer.
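A minimal sketch of the preprocessing and the RMS-propagation update described above is given below; the function and variable names are illustrative rather than the thesis code, and image resizing is omitted:

```python
import numpy as np

# Illustrative sketch of the preprocessing and RMSProp update described above
# (not the thesis implementation).
def preprocess(images):
    # images: uint8 RGB arrays already resized to 32x32, shape (N, 32, 32, 3)
    x = images.astype(np.float32)
    x = 255.0 - x                       # invert: dark glyph on light -> light on dark
    x -= x.mean(axis=0)                 # zero mean over the complete dataset
    return x

def rmsprop_step(w, grad, cache, lr=1e-4, decay_rate=0.99, eps=1e-8):
    # RMS propagation as stated above: decay rate 0.99, learning rate 1e-4
    # (the learning rate itself is further decayed by 0.95 per epoch).
    cache = decay_rate * cache + (1 - decay_rate) * grad ** 2
    w -= lr * grad / (np.sqrt(cache) + eps)
    return w, cache
```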
4.5. Performance Evaluation

After 150 epochs of training, the accuracy of the DCNN for BHCR is presented in Table 4.4. The proposed DCNN achieves 99.43% recognition accuracy on the training dataset, 92.10% on the validation dataset and 91.25% on the test dataset.

Table 4.4: Accuracy of the DCNN for BHCR
Number of epochs: 150; training accuracy: 99.43%; validation accuracy: 92.10%; test accuracy: 91.25%.

The confusion matrix of the test samples is given in Table 4.5; from it, the number of samples and the recognition accuracy for each class can be seen. The table shows that the proposed method performs worst on the character "খ (KHA)". Among 240 samples, it correctly recognizes 187 cases (77.92%); in 26 cases (10.83%) the character is classified as "ঘ (GHA)" and in 7 cases (2.92%) it is classified as "থ (THA)", which looks similar even in printed form and is more difficult to distinguish in handwritten form. Similarly, among 316 samples of "ঘ (GHA)", the model correctly recognizes 254 cases (80.38%); in 33 cases (10.44%) it is classified as "খ (KHA)", in 7 cases (2.22%) as "ম (MA)" and in 6 cases (1.90%) as "য (YY)". The proposed method shows its best performance for "ং (ANUS)": among 157 samples, the model correctly recognizes 156 cases (99.36%) and in 1 case (0.64%) the sample is classified as "ঠ (TTHA)". Due to the large variation in writing styles, such character images are difficult to classify even for humans. Overall, the proposed DCNN misclassifies 1,388 out of 15,859 test cases and achieves 91.25% accuracy on the test dataset. Tables 4.6 and 4.7 present the confusion matrices of the training and validation datasets respectively, showing 99.43% recognition accuracy on the training dataset of 28,529 samples and 92.10% on the validation dataset of 8,400 samples.

Table 4.5: Confusion matrix produced for the test dataset (15,859 samples) by the DCNN for BHCR.
Table 4.6: Confusion matrix produced for the training dataset (28,529 samples) by the DCNN for BHCR.
Table 4.7: Confusion matrix produced for the validation dataset (8,400 samples) by the DCNN for BHCR.

Table 4.8: Comparison between the proposed DCNN and some state-of-the-art BHCR methods in terms of accuracy and variance, on the same test dataset of the combined database and the same experimental setup in terms of hardware and software.
1. kNN: test accuracy 64.878%, variance 0.011354
2. Wavelet (Daubechies) based feature extraction [52] followed by a kNN classifier: test accuracy 65.439%, variance 0.010489
3. Shallow CNN [55]: test accuracy 78.315%, variance 0.003316
4. AlexNet [59] with a customized last layer: test accuracy 80.04%, variance 0.003234
5. DCNN (proposed method): test accuracy 91.248%, variance 0.001042

Experiments have been carried out on the combined dataset described in Section 4.3.3. Table 4.8 compares the proposed DCNN with some state-of-the-art BHCR methods in terms of test accuracy and variance on the same test dataset of 15,859 samples of the combined database. The table shows that the proposed DCNN method for BHCR outperforms the other techniques in terms of both accuracy and variance. Moreover, since no feature extraction or significant preprocessing is needed, the computational time required to obtain results on the test dataset is very low compared to some of the other techniques in the table. It is also notable that the test accuracy of the proposed DCNN (91.25%) is very close to the validation accuracy during training (92.10%), which indicates good generalization of the network's learning.
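The per-class figures quoted above (for example, 187 of 240 খ samples, i.e., 77.92%) are read directly off a confusion matrix in which row i holds the true class and column j the predicted class. A small illustrative sketch follows; the matrix below is a toy example, not the thesis data:

```python
import numpy as np

# Sketch: per-class and overall accuracy from a confusion matrix C,
# where C[i, j] counts samples of true class i predicted as class j.
def per_class_accuracy(C):
    C = np.asarray(C, dtype=float)
    return np.diag(C) / C.sum(axis=1)      # correct / total, per true class

def overall_accuracy(C):
    C = np.asarray(C, dtype=float)
    return np.trace(C) / C.sum()           # all correct / all samples

# Toy 2-class matrix; the first row loosely mirrors the KHA figures above.
C = np.array([[187, 53],
              [ 33, 283]])
print(per_class_accuracy(C)[0])            # ~0.7792, i.e. 77.92%
print(overall_accuracy(C))
```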
Table 4.9 presents a comparison of the reported results of some prominent BHCR works with the proposed DCNN. It shows that the proposed method has been tested on the largest dataset among the state-of-the-art methods: in this experiment two separate databases are merged together to form a large dataset, and many samples of this combined dataset are challenging to recognize. It is also notable that the proposed method does not employ any feature selection technique, whereas many existing methods use one or two stages of feature selection. Although the methods in Refs. [52] and [53] consider only 45 and 36 classes respectively, by merging or excluding some confusing characters, the table shows that the proposed method outperforms all other techniques except the method of Ref. [54]. The recognition technique of Ref. [54] is much more complex than the others; it uses two recognition stages, each consisting of individual feature selection and classification techniques, whereas the proposed method, with no feature selection, is very simple. Also, in Ref. [54], significant preprocessing was performed on the database used. As a result, once training is completed, the proposed method recognizes test samples very quickly compared to methods that use a computationally expensive feature selection stage. Moreover, the dataset used for training, validation and testing in Ref. [54] is a portion (the BBCD database) of the combined database prepared for the experiments in this work.

Table 4.9: Comparison of reported test accuracies of some state-of-the-art methods with the proposed DCNN on BHCR.
Basu et al. [53]: 36 classes; database and test set size not reported; features: longest run, modified shadow, octant-centroid; classifier: MLP; recognition accuracy 80.58%.
Bhowmick et al. [52]: total 27,000 samples (18,000 training, 4,500 validation); test set 4,500; features: wavelet transformation; classifier: MLP; recognition accuracy 84.33%.
Rahman et al. [33]: 49 classes; database and test set size not available; multi-stage framework; classifier: multiple experts; recognition accuracy 88.38%.
Bhattacharya et al. [43]: 50 classes; total 20,187 samples (10,000 training); test set 10,187; features: chain code histogram; classifier: MLP; recognition accuracy 88.95%.
Bhattacharya et al. [35]: 50 classes; total 24,481 samples (15,000 training); test set 9,481; two-stage framework; classifiers: HMM and MLP; recognition accuracy 90.42%.
Bhattacharya et al. [54]: 50 classes; BBCD database containing 37,858 samples (20,000 training, 5,000 validation); test set 12,858; features: regular and irregular grid based selection; classifiers: MQDF and MLP; recognition accuracy 95.84%.
BHCR-CNN [55]: 50 classes; prepared dataset of 20,000 samples (17,500 training); test set 2,500; no feature selection; classifier: shallow CNN; recognition accuracy 85.96%.
Proposed BHCR-DCNN: 50 classes; combined dataset of CMATERdb 3.1.2 and BBCD with 52,788 samples in total (28,529 training, 8,400 validation); test set 15,859; no feature selection; classifier: deep CNN; recognition accuracy 91.25%.

4.6. Conclusion

This chapter highlighted several achievements of the proposed DCNN architecture by presenting the results from different aspects. Different state-of-the-art performance metrics are used to evaluate its effectiveness, and the proposed DCNN has been trained on the largest database among all reported works on BHCR so far. From all the results and illustrations, it is clearly seen that the proposed methodology has the capacity to outperform many of the existing BHCR approaches for Bangla characters.

Chapter 5
Conclusion

5.1. Summary of the work

Inspired by the human visual cortex (the visual cognition functions of the human brain), a CNN has the ability to recognize visual patterns directly from pixel images with minimal preprocessing. Therefore, in this thesis a CNN structure is investigated, without any feature selection, for Bangla handwritten pattern classification.
The proposed CNN structure has more depth than those of previous studies on the Bangla handwritten character recognition task. In this work, two large databases are merged together to form one larger database for the recognition task, and the outcome has been compared with existing state-of-the-art methods for Bangla HCR. The proposed method has shown outstanding performance with respect to the existing methods in terms of generalized recognition capacity, test set accuracy and robustness in recognition. Since the Bangla character set has 50 characters, many of which are similar, and since the CNN architecture proposed in this thesis does not depend on specific features linked to the character shapes of the Bangla language, it has a more generalized capacity for recognition and greater robustness in the recognition task; the proposed CNN architecture can therefore also be used for HCR in other languages. Some other state-of-the-art techniques show good recognition accuracy, but they use features that are applicable only to the Bangla character set. So, the proposed deep CNN architecture is both efficient and robust for Bangla HCR.

5.2. Future Scope

There is tremendous scope for future extension of this work. Some possibilities are listed below:
• Multiple CNN channels (a CNN ensemble) may be used to obtain a majority-based decision; the expected error of an ensemble is always smaller than the expected error of a single predictor.
• A dropout layer may be introduced into the deep CNN model used in this work. Dropout is a regularization technique for reducing over-fitting in neural networks by preventing complex co-adaptations on the training data.
• An Inception module (i.e., different kernel sizes operating in parallel) may be introduced. The idea of the Inception layer is to cover a bigger area while also keeping a fine resolution for small details in the images: a set of filters with different sizes handles multiple object scales better, with the advantage that all filters in the Inception layer are learnable. The most straightforward way to improve performance in deep learning is to use more layers and more data, and studies show that incorporating Inception modules increases the accuracy rate; GoogLeNet uses 9 Inception modules.
• Residual Network (ResNet) layers may be introduced by feeding forward the output of two successive convolutional layers while also bypassing the input to the next layers. The idea of the residual network is to use blocks that re-route the input and add it to the concept learned from the previous layer, so that during learning the next layer learns the concepts of the previous layer plus the input of that previous layer. This tends to work better than learning a concept without a reference to what was used to learn it.
• The performance of the proposed CNN could be analyzed for Bangla compound characters and digits.

References

[1] Suen, CY, Berthod, M. and Mori, S., Automatic recognition of handprinted characters—the state of the art, Proc IEEE 68(4):469–487, 1980. [2] Govindan, VK. and Shivaprasad, AP., Character recognition:
<s>a review. Pattern Recognit, 7:671–683, 1990. [3] Trier, OD, Jain, AK. and Taxt, T., Feature extraction methods for character recognition—a survey,. Pattern Recognit 29(4):641–662, 1996. [4] Plamondon, R. and Srihari, SN., On-line and off-line handwriting recognition: a comprehensive survey,. IEEE Trans Pattern Anal Mach Intell 22(1):63–84, 2000. [5] Arica, N. and Yarman-Vural, F., An overview of character recognition focused on off-line handwriting,. IEEE Trans Syst Man Cybern Part C Appl Rev 31(2):216–232, 2001. [6] Cheriet, M., Kharma, N., Liu, C-L. and Suen, CY., Character recognition systems: a guide for students and practitioner, Wiley, New York, 2007. [7] Mori, S., Suen, CY. and Yamamoto, K., Historical review of OCR research and development, Proc IEEE 80(7):1029–1058, 1992. [8] Uchida, S. and Sakoe, H., A survey of elastic matching techniques for handwritten character recognition, IEICE Transactions on Information and Systems E88-D(8): 1781–1790, 2005. [9] Liu, C-L., Sako, H. and Fujisawa, H., Performance evaluation of pattern classifiers for handwritten character recognition, Int J Doc Anal Recognit 4(3):191–204, 2002. [10] Park, H-S, Sin, B-K, Moon, J. and Lee, S-W, A 2-D HMM method for offline handwritten character recognition, Int J Pattern Recognit Artif Intell 15(1):91–105, 2001. [11] Vinciarelli, A. and Bengio, S., Writer adaptation techniques in HMM based off-line cursive script recognition, Pattern Recognit Lett 23:905–916, 2002. [12] Al-Omari, FA and Al-Jarrah, O., Handwritten Indian numerals recognition system using probabilistic neural networks, Adv Eng Inform 18(1): 9–16, 2004. [13] Liu, C-L and Fujisawa, H., Classification and learning methods for character recognition: advances and remaining problems, Stud Comput Intell (SCI) 90:139–161, 2008. [14] Kim, D. and Bang, S-Y, A handwritten numeral character classification using tolerant rough set, IEEE Trans Pattern Anal Mach Intell 22(9):923–937, 2000. [15] Parizeau, M. and Plamondon, R., A fuzzy-syntactic approach to allograph modeling for cursive script recognition, IEEE Trans Pattern Anal Mach Intell 17:702–712, 1995. [16] Hanmandlu, M., Ramana and Murthy, OV, Fuzzy model based recognition of handwritten numerals, Pattern Recognit 40(6):1840–1854, 2007. [17] Dong, J-X, Krzyak, A. and Suen, CY, An improved handwritten Chinese character recognition system using support vector machine, Pattern Recognit Lett 26:1849–1856, 2007. [18] Camastra, F., SVM-based cursive character recognizer, Pattern Recognit 40:3721–3727, 2007. [19] LeCun, Y., Bottou, L., Bengio, Y. and Haffner, P., Gradient-based learning applied to document recognition, Proc IEEE 86(11): 2278–2324, 1998. References [20] Srihari, SN, Cohen, E., Hull, JJ. and Kuan, L., A system to locate and recognize ZIP codes in handwritten addresses, Int J Res Eng Post Appl 1(1):37–56, 1989. [21] Marti, U-V and Bunke, H., The IAM-database: an English sentence database for offline handwriting recognition, Int J Doc Anal Recognit 5:39–46, 2002. [22] Tang, Y., Off-line recognition of Chinese handwriting by multifeature and multilevel classification, IEEE Trans Pattern Anal Mach Intell 20:556–561, 1998. [23] Shi, D., Damper, RI and GUNN, SR, Offline handwritten Chinese character recognition by radical decomposition, ACM Trans Asian Lang Inf Process 2(1):2748, 2003. [24] Lee, SW and Park, JS, Nonlinear shape normalization methods for the recognition of large-set handwritten characters, Pattern Recognit 27(7):895–902, 1994. [25] Yamada, H., Yamamoto,</s>
<s>K. and Saito, T., A non-linear normalization method for handprinted Kanji character recognition—line density equalization, Pattern Recognit 23(9):1023–1029, 1990. [26] Miyao, H., Maruyama, M., Nakano, Y. and Hananoi, T., Off-line handwritten character recognition by SVM on the virtual examples synthesized from on-line characters. In: Proceedings of the eighth international conference on document analysis and recognition, pp 494–498, 2005. [27] Sethi, IK and Chatterjee, B., Machine recognition of constrained handprinted Devanagari, Pattern Recognit 9(2):69–75, 1977. [28] Parui, SK, Chaudhuri, BB, Dutta and Majumder, D., A procedure for recognition of connected hand written numerals, Int J Syst Sci 13:1019–1029, 1982. [29] Dutta, AK and Chaudhuri, S., Bengali alpha-numeric character recognition using curvature features, Pattern Recognit 26:1757– 1770, 1993. [30] Bhattacharya, U., Das, TK, Datta, A., Parui, SK and Chaudhuri, BB, A hybrid scheme for handprinted numeral recognition based on a self-organizing network and MLP classifiers, Int J Patt Recog Artif Intell 16:845–864, 2002. [31] Bhattacharya, U. and Chaudhuri, BB, Fusion of combination rules of an ensemble of MLP classifiers for improved recognition accuracy of handprinted Bangla numerals, In: Proceedings of the eighth international conference on document analysis and recognition, pp 322–326, 2005. [32] Bhattacharya, U. and Chaudhuri, BB, Handwritten numeral databases of Indian scripts and multistage recognition of mixed numerals, IEEE Trans Pattern Anal Mach Intell 31(3):444–457, 2009. [33] Rahman, AFR, Rahman, R. and Fairhurst, MC, Recognition of handwritten Bengali characters: a novel multistage approach, Pattern Recognit 35:997–1006, 2002. [34] Bhowmick, TK, Bhattacharya, U. and Parui, SK, Recognition of Bangla handwritten characters using an MLP classifier based on stroke features, In: Proceedings of 11th international conference on neural information processing, pp 814–819, 2004. [35] Bhattacharya, U., Parui, SK. and Shaw, B., A hybrid scheme for recognition of handwritten Bangla basic characters based on HMM and MLP classifiers, In: Proceedings of 6th international conference on advances in pattern recognition, pp 101–106, 2007. [36] Hull, JJ, A database for handwritten text recognition research, IEEE Trans Patt Anal Mach Intell 16:550–554, 1994. [37] Khosravi, H. and Kabir, E., Introducing a very large dataset of handwritten Farsi digits References and a study on their varieties, Pattern Recognit Lett 28:1133–1141, 2007. [38] Al-Maadeed, S., Elliman and D., Higgins, CA, A database for Arabic handwritten text recognition research, In: Proceedings of the eighth international workshop on frontiers in handwriting recognition, p 485, 2002. [39] Su, T., Zhang, T. and Guan, D., Corpus-based HIT-MW database for offline recognition of general-purpose Chinese handwritten text, Int J Doc Anal Recognit 10:27–38, 2007. [40] Saito, T., Yamada, H. and Yamamoto, K., On the database ELT9 of handprinted characters in JIS Chinese characters and its analysis (in Japanese), Trans IECEJ 68-D(4):757–764, 1985. [41] Al-Ohali, Y., Cheriet, M. and Suen, C., Databases for recognition of handwritten Arabic cheques, Pattern Recognit 36:111–121 , 2003. [42] Noumi, T., Matsui, T., Yamashita, I., Wakahara, T. and Tsutsumida, T., Tegaki Suji database ‘IPTP CD-ROM1’ no ichi bunseki (in Japanese). In: 1994 autumn meeting of IEICE, vol D-309, September 1994, 1994. [43] Bhattacharya, U., Shridhar, M. and Parui, SK,</s>
<s>On recognition of handwritten Bangla characters, In: Proceedings of 5th Indian conference on computer vision, graphics and image processing, pp 817–828, 2006. [44] George, A. and Gafoor, F., Contourlet Transform Based Feature Extraction For Handwritten Malayalam Character Recognition Using Neural Network, IRF Int. Conf. Chennai, pp: 107-110, 2014. [45] Nemmour, H. and Chibani, Y., Handwritten Arabic Word Recognition based on Ridgelet Transform and support Vector Machines, IEEE, pp: 357-361, 2011. [46] Moni, B. S., and Raju, G, Modified Quadratic Classifier and Directional Features for Handwritten Malayalam Character Recognition, IJCA Special Issue on Computer Science-New Dimensions and Perspectives, pp: 30-34, 2011. [47] Nusaibath, C. and Ameera, M. P. M., Off-line Handwritten Malayalam Character Recognition using Gabor Filters, Int. J. of Computer Trends and Technology, pp: 2476-2479, 2013. [48] Lecun,Y. and Bengio, Y., Pattern Recognition and Neural Networks, in Arbib, M. A. (Eds), The Handbook of BrainTheory and Neural Networks, MIT Press 1995. [49] Singh, P. and Budhiraja, S., Offline Handwritten Gurmukhi Numeral Recognition using Wavelet Transforms, I. J Modern Education and Computer Science, pp: 34-39, 2012. [50] Chen, G. Y. and Kegl, B., Invarient Pattern Recognition using Contourlets and Adaboost, Pattern Recognition Society Elsevier, pp: 1-13, 2012. [51] Gonzalez, A., Bergasa, L. M., Yebes, J. J., and Bronte, S, A Character Recognition Method in Natural Scene Images, Pattern Recognition (ICPR), pp: 621-624, 2012. [52] T. K. Bhowmik, P. Ghanty, A. Roy and S. K. Parui, SVM-based hierarchical architec-tures for handwritten Bangla character recognition, International Journal on Document Analysis and Recognition, vol. 12, no. 2, pp. 97-108, 2009. [53] S. Basu, N. Das, R. Sarkar, M. Kundu, M. Nasipuri and D. K. Basu, A hierarchicalapproach to recognition of handwritten Bangla characters, Pattern Recognition, vol. 42, pp. 1467–1484, 2009. [54] Bhattacharya, U., Shridhar, M., Parui, S. K., Sen,P. K. and Chaudhuri, B. B., Offline recognition of handwritten Bangla characters: An efficient two-stage approach, Pattern Analysis and Applications, vol. 15, no. 4 , pp. 445-458, 2012. References [55] Rahman, Md. M., Akhand, M. A. H., Islam, S., Shill, P. C. and Rahman, M. M. H.,Bangla Handwritten Character Recognition using Convolutional Neural Network, Int.J. Image, Graphics and Signal Processing, vol. 08, pp. 42-29, 2015. [56] Center for Microprocessor Application for Training Education and Research Retrived July 10, 2017 from https://code.google.com/archive/p/cmaterdb/ [57] Kaur, K., and Garg, N. K., Use of 40-point Feature Extraction for Recognition of Handwritten Numerals and English Characters, IJCTA, pp: 1409-1414, 2014. [58] Aggarwal, A., Rani, R. and RenuDhir, Handwritten Devanagari Character Recognition Using Gradient Features, Pattern Recognition (ICPR), pp: 621-624, 2012. [59] Ciresan, Dan, Meier, U., and Schmidhuber, J., Multi-column deep neural networks for image classification, IEEE Conference on Computer Vision and Pattern Recognition (New York, NY: Institute of Electrical and Electronics Engineers (IEEE)), 2012. [60] Ciresan, Dan, Meier, U., Masci, J., Gambardella, L.M., and Schmidhuber, J., Flexible, High Performance Convolutional Neural Networks for Image Classification, Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence-Volume Two: 1237–1242, 2013. 
[61] Russakovsky, O., ImageNet Large Scale Visual Recognition Challenge, International Journal of Computer Vision, 2014.

Appendix A

Table A.1: Samples of Database CMATERdb 3.1.2 (character sample images).
Table A.2: Samples of Database BBCD (character sample images).
A Real Time Approach for Bangla Text Extraction and Translation from Traffic Sign
2018 21st International Conference of Computer and Information Technology (ICCIT), 21-23 December, 2018

Saimoom Safayet Akash, CSE Dept., United International University, Dhaka, Bangladesh, saimoom.akash@gmail.com
Sandip Kabiraz, CSE Dept., United International University, Dhaka, Bangladesh, 7458sanju@cse.uiu.ac.bd
Sabbir Arif Siddique, CSE Dept., United International University, Dhaka, Bangladesh, sabb.a.sidd@gmail.com
Ishteaque Alam, CSE Dept., United International University, Dhaka, Bangladesh, ialam123006@cse.uiu.ac.bd
SM Ashraful Islam, NLP Team, eGeneration Ltd., Dhaka, Bangladesh, ashraf.islam@egeneration.com.bd
Mohammad Nurul Huda, CSE Dept., United International University, Dhaka, Bangladesh, mnh@cse.uiu.ac.bd

Abstract—This paper develops and demonstrates a system for traffic-instruction detection and translation that can extract and convert Bangla text from natural images containing traffic instructions. In developing the system, we have applied various techniques to extract and convert information from natural images, involving image processing, machine learning, optical character recognition and machine translation. The proposed system consists of three steps: text extraction from the image, post-processing by a language model, and machine translation.

Keywords—Optical Character Recognition, Image Processing, Machine Translation, Language Model.

I. INTRODUCTION

The problem of understanding traffic signs written in Bangla has been identified as a major problem for foreigners. As these traffic signs contain images of visual traffic signals along with Bangla text, it is nearly impossible for a foreign citizen to comprehend the signs. Figure 1 illustrates a few existing traffic signs found on the roads of Dhaka (Figure 1: Traffic Signs in Bangladesh). Moreover, the placement of traffic signs does not follow any international standard, so it may be rather difficult for non-local residents to find the signs without much effort. In this paper, we propose a state-of-the-art solution to address the mentioned problems, using image processing and machine translation for this purpose. The main goal of the image processing part of this research is to analyze a captured image and to find and segment the Bangla letters in it. In addition, we have incorporated an efficient machine translator to translate the extracted Bangla text into English and other major languages. The paper [1] proposed a novel system for the automatic detection and recognition of text in traffic signs; the authors proposed a system capable of defining a search area within the image. The paper [2] recommended a system that can detect and recognize instructions from traffic signals, intended to be integrated into an Advanced Driver Assistance System (ADAS). We have adopted and implemented techniques expressed and illustrated in these papers, and we have incorporated additional techniques to improve the outcome of the Bangla OCR.
These techniques include edge detection using the Canny method [3], Gaussian filtering [4], edge tracking by hysteresis [5], B/W labeling, character segmentation [6], and character recognition through a Back Propagation Neural Network [7] to process the text extracted from the image, together with an Example-Based Machine Translation algorithm [8].
Our proposed method for Bangla text detection and translation from traffic signs comprises three stages. The first stage detects the traffic signs in natural images; the second stage extracts the Bangla text from the natural image; in the final stage, the text is translated into English. This paper represents the first endeavor to develop a traffic sign detection and translation system for the Bangla language; although Google and Bing have similar products, they do not support Bangla yet. The paper is organized in the following manner. Section II covers the previous studies conducted by other researchers in the related fields. Section III provides a detailed explanation of the proposed system along with all relevant diagrams and illustrations. Section IV presents the experimental analysis and results of the executed procedures. In Section V, we conclude the paper by identifying areas of improvement and our plan to accommodate these future developments. The last section contains the acknowledgement and the references of the works that have helped and guided us in conducting this research.

II. Previous Study

There are many works on Bangla OCR from documents, such as Bangla OCR by UIU [9] and the first commercial OCR "Puthi OCR" [10] by Team Engine. Most prominently, there are two notable thesis works on Bangla OCR from images: the first one is from Khulna University by Zahid et al. [11] and the other one is from the Computer Vision & Pattern Recognition Unit, Indian Statistical Institute, Kolkata, India [12]. In this research, we have incorporated techniques analyzed from the above-mentioned sources and combined them into a single system application.

III. Proposed System

The proposed system processes the captured images and converts them into English instructions. Distinct modules of the system execute in sequence to achieve the targeted goal from the input, and each of these modules employs diverse tools and contemporary algorithms. These modules are explained with demonstrations and relevant diagrams in the following subsections. The proposed system is illustrated in the system diagram in Figure 2 (Figure 2: Proposed System Diagram).

III.A. Image Processing

The captured image that contains a Bangla traffic instruction is processed through a sequence of techniques, which are clarified by demonstration in the following subsections.

III.A.1. Pre-Processing

After the natural image is captured, the preprocessing mechanism is applied to the image; its input and output are illustrated in Figure 3 (Figure 3: Capturing and Preprocessing of Natural Image). Preprocessing resizes the captured image and adjusts its RGB values; the outcome of this stage is a B/W image with the corrected proportions.
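A minimal sketch of this pre-processing stage is shown below using OpenCV in Python; the paper itself uses MATLAB image-processing routines, so the function names, the Otsu threshold and the target width here are illustrative assumptions:

```python
import cv2

# Illustrative OpenCV sketch of the pre-processing stage (not the paper's
# MATLAB implementation): resize to a corrected proportion, then produce
# a black-and-white image.
def preprocess(path, width=640):
    img = cv2.imread(path)                               # captured RGB image
    h, w = img.shape[:2]
    img = cv2.resize(img, (width, int(h * width / w)))   # keep the aspect ratio
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, bw = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # B/W image
    return img, bw
```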
III.A.2. Pre-Filtering

On completion of the preprocessing, pre-filtering is applied to the processed image. The pre-filtering of the image is conducted by employing edge detection using the Canny edge detection method, which is less likely than other methods to be fooled by noise. The general criteria for edge detection include the following:
I. Detection of edges with a low error rate, which means that the detection should accurately catch as many edges in the image as possible.
II. The edge point detected by the operator should accurately localize on the center of the edge.
III. A given edge in the image should only be marked once, and, where possible, image noise should not create false edges.
After the edge detection process is conducted, a Gaussian filter [4] is applied to the output to further fine-tune the detected edges. The equation for a Gaussian filter kernel of size (2k+1) × (2k+1) is:

H(i, j) = (1 / (2πσ²)) · exp(−[(i − (k+1))² + (j − (k+1))²] / (2σ²)),  for 1 ≤ i, j ≤ 2k+1.

A commonly used example is a 5×5 Gaussian kernel with σ = 1.4, which is applied to the image by convolution. After applying the filter, the intensity gradient of the image is established. An edge detection operator (Roberts, Prewitt or Sobel, for example) returns a value for the first derivative in the horizontal direction (Gx) and the vertical direction (Gy). From these, the edge gradient magnitude and direction can be determined as:

G = sqrt(Gx² + Gy²),  Θ = atan2(Gy, Gx).

In consequence, an edge-thinning technique termed non-maximum suppression is applied to the produced output. After non-maximum suppression, the edge pixels represent the real edges quite accurately; however, some edge pixels remain that are caused by noise and color variation. In order to get rid of these spurious responses, it is essential to filter out edge pixels with weak gradient values and preserve edges with high gradient values. Thus, two threshold values are set to classify the different types of edge pixels: one is called the high threshold value and the other the low threshold value. After resolving the double thresholds, edge tracking is conducted by hysteresis. Afterwards, structural elements of the image are extracted and then dilated. On completion of the above-mentioned processes, the cropped images are acquired; the cropped elements, along with some garbage, are illustrated in Figure 4 (Figure 4: Cropped Elements after Pre-Filtering).

III.A.3. Filtering

Filtering techniques are then applied to the pre-filtered output; these include range estimation of the pre-filtered elements. In this process, the garbage elements are removed and the actual Bangla text in the image is revealed. The flow diagram of the filtering process is illustrated in Figure 5 (Figure 5: Flow Diagram of Filtering), and the output of this stage is illustrated in Figure 6 (Figure 6: Filtered Image).
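A compact sketch of the pre-filtering and filtering pipeline is shown below using OpenCV in Python (the paper implements these steps in MATLAB). Note that, as in the standard formulation of the Canny detector, the Gaussian smoothing is applied before edge detection in this sketch, and the hysteresis thresholds are placeholders rather than the paper's values:

```python
import cv2

# Illustrative sketch (OpenCV 4.x): Gaussian smoothing, Canny edge detection
# with double (hysteresis) thresholds, dilation, and cropping of candidate
# regions around connected contours.
def prefilter(bw_image):
    smoothed = cv2.GaussianBlur(bw_image, (5, 5), 1.4)   # 5x5 kernel, sigma = 1.4
    edges = cv2.Canny(smoothed, 50, 150)                 # low/high thresholds (placeholders)
    dilated = cv2.dilate(edges, None, iterations=1)      # dilate structural elements
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]       # (x, y, w, h) crop boxes
```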
III.A.4. Character Segmentation

The character segmentation process segments the characters into two categories. The first category is characters without KAR (Bangla -কার) (Figure 7: Characters without KAR); the second category is characters with KAR (Figure 8: Characters with KAR). The final output therefore looks like the illustration in Figure 9 (Figure 9: Final Output of Character Segmentation). As we train our neural network on black letters on a white background, after segmenting the letters we simply swap the black and white pixels of each letter; the output is given in Figure 10 (Figure 10: Output after Pixel Reversing). In addition, for better image processing we reshape each image to a constant height and width: we use a constant 45 × 45 (= 2025) shape for each letter (Figure 11: Characters after Reshaping). The flow diagram of the character segmentation is illustrated in Figure 12 (Figure 12: Flow Diagram of Character Segmentation).

III.B. Character Recognition and Post-Processing

After the segmentation process, the output must be converted into machine-readable text; a neural network is employed to perform that conversion. However, the output of the neural network may contain some garbage, which must be eliminated to extract clean text. These processes are detailed in the following sub-sections.

III.B.1. Character Recognition Using BP ANN

A Backpropagation Artificial Neural Network (BP ANN) is employed in the proposed system to convert the segmented characters into electronic text; the text is retrieved in a Unicode font. The backpropagation (BP) artificial neural network is one of the most commonly used algorithms in OCR, as it is highly effective in this context. A typical BP ANN is illustrated in Figure 13 (Figure 13: Back Propagation Artificial Neural Network), and the technique the BP ANN uses to extract the electronic character from the character segmentation output is depicted in Figure 14 (Figure 14: Flow Diagram of Character Recognition).

III.B.2. Garbage Detection and Deletion

After character segmentation, post-processing is conducted, which primarily consists of garbage detection and deletion. To detect garbage among multiple characters we perform partial string matching, an approach that identifies garbage values and is useful for predicting words from a partially correct word. Our algorithm is as follows:
• Split the result string.
• Iterate through all words.
• If a word is longer than one character, perform partial matching against each of the Bangla words in the dictionary, find the best-matched Bangla word and return it.
Levenshtein's distance [13] is employed to acquire the best matching strings from the string dictionary. Figure 15 shows an extracted character string with garbage values (Figure 15: Sample Output with Garbage). The first ষ in Figure 15 is removed because it does not partially match any word; গ in Figure 15 is also removed, as it is a single character. Instead of detecting সেবা, our BP ANN returns সেবােষ, but since it partially matches, it is replaced with the correct word: it matches সেবা because three edit operations are needed to transform one into the other, which is the minimum among the words in the dictionary, and the similarity between the two words is 62%. Hence, after post-processing of the sample output, we acquire the clean and authentic string illustrated in Figure 16 (Figure 16: Output after Post Processing).
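A minimal Python sketch of this post-processing step, combining Levenshtein distance with a similarity ratio to pick the closest dictionary word, is shown below; the 0.6 similarity cut-off is an assumption chosen to be consistent with the 62% example above, not a value stated in the paper:

```python
# Sketch of the garbage-correction step: Levenshtein distance plus a
# similarity ratio to replace a garbled OCR word with its closest
# dictionary entry (illustrative only).
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def best_match(word, dictionary, min_similarity=0.6):
    # min_similarity is an assumed cut-off, consistent with the 62% example
    scored = []
    for entry in dictionary:
        d = levenshtein(word, entry)
        similarity = 1.0 - d / max(len(word), len(entry))
        scored.append((similarity, entry))
    similarity, entry = max(scored)
    return entry if similarity >= min_similarity else None
```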
III.C. Machine Translation

Having successfully extracted authentic and clean text from the natural image, the next step is to convert the Bangla text into English. Machine translation is the process of translating a word or sentence into the corresponding word or sentence of another language. It is a complex problem because thousands of factors need to be considered: at the most basic level, one can simply replace the words in a sentence with the corresponding words in the target language, but that does not produce a good translation, since sentence structures differ and whole phrases must be matched with their closest counterparts in the target language. The approach [8] taken in this paper is illustrated in Figure 17 (Figure 17: Flow Diagram of Garbage Deletion Process). This approach can successfully translate most of the common traffic instructions; however, English meanings that are constructed from multiple Bangla words are not considered here. The process of our machine translation approach is illustrated in Figure 18, and the translated output of the extracted Bangla text is shown in Figure 19 (Figure 18: Flow Diagram of Machine Translation; Figure 19: Machine Translation of Extracted Bangla Text).

IV. EXPERIMENTAL RESULT AND ANALYSIS

Images of traffic instructions are very rare on the internet, and the traffic instructions themselves are hard to find, so we did not manage to obtain a large number of images for training and testing. This was one of the biggest difficulties we faced; therefore, we had to test our system with limited training and testing data.
• Training image corpus: size 22 images, format JPG
• Test image corpus: size 6 images, format JPG
• Training and test corpora are disjoint
• Number of NN training characters: 4 different versions of 18 letters (from the 22 pictures)
• Image capturing weather conditions: sunny
• Image testing conditions: glare-free and not angled
The configuration of the BP ANN is as follows:
• Input layer: 45 × 45 (= 2025)
• Output layer: 18 (for 18 characters)
• 2nd layer nodes: 687
• 3rd layer nodes: 224
• Primary training set: 4 different versions of 18 letters
• Adjustment value: 2
• Minimum error: 1.1
• Corpus sentences for pattern matching: 4
• Number of words in the traffic instruction database: 45

IV.A. Experimental Setup

• Image processing: MATLAB Image Processing libraries
• Neural network: BPSimplified library for C#
• Machine translation: C#
• Application type: desktop WinForm application (C#) combined with MATLAB scripts
• Application environment: Windows 8.1 64-bit
A demo corpus for pattern matching is shown in Table 1, and demo directory data and traffic data are shown in Figures 20 and 21 respectively.

Table 1: Demo Corpus for Pattern Matching (Bangla sentence → English sentence)
পািকং িনেষধ → No parking
পথচারী চলাচল িনেষধ → No Pedestrian
সামেন ট-জাংশন আেছ → T-junction
সামেন পথচারী পারাপার → Pedestrian Crossing

Figure 20: Demo Directory Data. Figure 21: Demo Traffic Image Corpus.
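A minimal sketch of the example-based lookup is shown below: known Bangla instructions (as in the demo corpus of Table 1) map directly to English sentences, with a word-level fallback for unseen input. The entries and function names are illustrative placeholders for the paper's 45-word traffic-instruction database:

```python
# Sketch of the example-based translation lookup (illustrative only).
# Keys are taken verbatim from the demo corpus of Table 1; the real system
# uses a 45-word traffic-instruction database and 4 corpus sentences.
phrase_table = {
    "পািকং িনেষধ": "No parking",
    "সামেন পথচারী পারাপার": "Pedestrian Crossing",
}
word_table = {}   # word-level dictionary (placeholder)

def translate(sentence):
    if sentence in phrase_table:                    # whole-sentence match first
        return phrase_table[sentence]
    words = sentence.split()
    return " ".join(word_table.get(w, w) for w in words)   # word-by-word fallback

print(translate("পািকং িনেষধ"))   # No parking
```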
IV.B. Experimental Result

The demo experimental results are shown in Table 2, and the actions of the software are illustrated in Figure 22 (Figure 22: Software Demonstration).

Table 2: Demo Experimental Result (input sentence in Bangla → output sentence in English)
সামেন স সত আেছ → Narrow Bridge
সামেন ু ল → School
হাসপাতাল → Hospital
সামেন ওয়াই-জাংশন আেছ → Y-Junction
থামােনা িনেষধ → No Stopping
হণ বাজােনা িনেষধ → No Horn Honking
সেবা গিতসীমা → Highest Speedlimit
িবপদজনক খাদঁ → Dangerous Dip
বনেভাজন এলাকা → Picnic Site
রা যাপেনর ব ব া আেছ → Overnight Accomodation Way
সামেন ডানিদেক আচমকা মাড় → Right Sharp Bend
সামেন ডানিদক হেত রা া স হেয়েছ → Right From Road Narrow

V. Conclusion and Future Works

In this research work, state-of-the-art algorithms to translate Bangla traffic signs into English for foreigners were implemented. Because the Canny edge detection method is applied in the pre-filtering process to detect edges in the captured image, the system is less prone to being deceived by noise; consequently, it is able to analyze signs affected by rain, leaves and dirt and produce output that is quite accurate. In the process of conducting the research work, we have identified a number of constraints and areas of improvement, the most notable of which are listed below:
• Limited size of the training corpus
• Limitation of the OCR for angled photos
• Image adjustment is not dynamic
• Overfitting of data in the neural network
• Machine translation needs optimization
The authors have acknowledged the limitations stated above and constructed a strategy to remove them by incorporating a sizeable training image corpus. In addition, better optimization of the employed algorithms will further remove inaccuracies and allow the system to execute more efficiently. In future, the authors would like to incorporate the following developments into the proposed system:
• Remove all the limitations.
• Integrate a multi-layer Back Propagation Artificial Neural Network.
• Extend the system into a mobile application.
• Complete Bangla OCR for natural images.
• Develop a Bangla-English translation engine.
• Integrate text-to-speech facilities.
Moreover, the authors would like to incorporate driver movement detection through accelerometer, gyroscope and compass sensor data, to align and compare the driver's movement with the instruction on the traffic sign; if the driver's movement data is deemed illegitimate according to the traffic signs, the system will generate a warning sound.

REFERENCES
[1] Jack Greenhalgh, Majid Mirmehdi, "Recognizing Text-Based Traffic Signs," IEEE Transactions on Intelligent Transportation Systems, Volume 16, Issue 3, June 2015.
[2] Swati M, K.V. Suresh, "Automatic traffic sign detection and recognition - A Review," 2017 International Conference on Algorithms, Methodology, Models and Applications in Emerging Technologies (ICAMMAET).
[3] J. Canny, "A Computational Approach to Edge Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679-698, Nov. 1986.
[4] K. Ito, "Gaussian filter for nonlinear filtering problems," Proceedings of the 39th IEEE Conference on Decision and Control (Cat. No.00CH37187), Sydney, NSW, 2000, pp. 1218-1223, vol. 2.
[5] Yan Zhang and Lee Makowski, "Auto-thresholding Edge Detector for bio-image processing," 2015 41st Annual Northeast Biomedical Engineering Conference (NEBEC), Troy, NY, 2015, pp. 1-2.
[6] A. Rehman, "Offline touched cursive script segmentation based on pixel intensity analysis: Character segmentation based on pixel intensity analysis," 2017 Twelfth International Conference on Digital Information Management (ICDIM), Fukuoka, 2017, pp. 324-327.
[7] Y. Li, Y. Fu, H. Li and S. Zhang, "The Improved Training Algorithm of Back Propagation Neural Network with Self-adaptive Learning Rate," 2009 International Conference on Computational Intelligence and Natural Computing, Wuhan, 2009, pp. 73-76.
[8] Linsen Yu, Yongmei Liu and Tianwen Zhang, "Using Example-Based Machine Translation Method For Automatic Image Annotation," 2006 6th World Congress on Intelligent Control and Automation, Dalian, 2006, pp. 9809-9812.
[9] M. S. Ahmed, T. Gonçalves and H. Sarwar, "Improving Bangla OCR output through correction algorithms," 2016 10th International Conference on Software, Knowledge, Information Management & Applications (SKIMA), Chengdu, 2016, pp. 338-343.
[10] Team Engine, "Puthi", Google Play Store, Version 1.1, updated Feb 22, 2018. https://play.google.com/store/apps/details?id=com.gigatech.mobile.puthi&hl=en
[11] Md. Zahidul Islam and Amit Kumar Mondal, "Towards a Standard Bangla PhotoOCR: Text Detection and Localization," in Proceedings of the 17th International Conference on Computer and Information Technology (ICCIT), 22-23 December 2014, Dhaka, Bangladesh, pp. 198-203.
[12] L. Rothacker, G. A. Fink, P. Banerjee, U. Bhattacharya and B. B. Chaudhuri, "Bag-of-Features HMMs for segmentation-free Bangla word spotting," ACM Proc. of the 4th International Workshop on Multilingual OCR (MOCR 2013), Washington DC, USA, August 24, Article No. 5, 2013.
[13] S. Konstantinidis, "Computing the Levenshtein distance of a regular language," IEEE Information Theory Workshop, 2005, Rotorua, 2005.
Model for Handwritten Recognition Based on Artificial Intelligence
Narumol Chumuang1 and Mahasak Ketcham2
1Department of Digital Media Technology, Muban Chombueng Rajabhat University, Thailand.
2Department of Management Information System, King Mongkut's University of Technology North Bangkok, Thailand.
lecho20@hotmail.com1 and mahasak.k@it.kmutnb.ac.th2

Abstract—This paper proposes a general algorithm for more efficient handwritten recognition. Handwritten recognition algorithms can reduce the time it takes to convert documents into text and thereby reduce the workload. The handwritten scripts used in this work are multi-script, consisting of the Bangla, Latin, and MNIST handwritten character series. The method has been designed and developed with genetic algorithms in conjunction with artificial intelligence techniques. The proposed algorithm achieves recognition accuracies of 94.05% on the Bangla set, 98.58% on Latin, and 100% on MNIST.

Keywords—handwritten, recognition, genetic algorithm, artificial intelligence, multi-scripts

I. INTRODUCTION
Handwritten Recognition (HR) is a challenging issue in the field of pattern recognition and artificial intelligence [1], [2], [3], [4]. HR methods facilitate the transcription of various forms of documents such as letters, postcards, historical records, inscriptions, Bai Lan (palm-leaf) books, newspaper texts, and many other documents. Handwriting complexity is classified into three levels (simple, moderate, and difficult), as shown in Fig. 1 to Fig. 3, respectively. Research on developing handwriting recognition systems remains challenging [5], [6]. Over 1,000 palm-leaf images [7] have been collected to transform the leaves into digital form; however, it was found that only part of the leaves could be digitized, and most Bai Lan inscription textbooks, ancient traditions, historic local records, and other valuable histories are in damaged condition, with some destroyed due to poor storage. This is another reason why researchers around the world recognize the importance of recognition and of interpreting handwritten characters by developing techniques and methods such as improved character classification. Accurate and rapid classification supports accurate information retrieval [8], sound classification [9], stock price forecasting [10], and relating laboratory findings in a hospital to medications and patient problems [11]. Handwritten Recognition Systems (HRS) are also widely used in business settings, such as bank checks [12], postal postcards [13], and postal code recognition [14], and benefit from this line of research [15, 16]. Identification refers to the process of identifying writers and examining them in order to validate documents; it is widely used, for example, in courts of justice and for automatic signature verification in banking transactions.

Fig. 1. A simple handwriting sample.
Fig. 2. A moderately difficult handwriting sample.
Fig. 3. A difficult handwriting sample.

II. RELATED WORKS
A. Data set
Bangla, or Bengali, is the second most popular language in India and Bangladesh, as shown in Fig. 4, and is the fifth most used language in the world [17].

Fig. 4. The variety in the writing styles of the Bangla handwritten data series.

The Latin handwritten character data set was prepared by van der Maaten [18]. The original images were compiled by Schomaker and Vuurpijl for forensic writer identification, using the Firemaker data set of handwritten notes with Dutch characters, as shown in Fig. 5.
Fig. 5. Example of the Latin character data set.

The MNIST data set is a subset of the NIST data set [19]; its digitized images are scaled to a standard size and centred, giving handwritten images of fixed size. Each image is 28 × 28 pixels, and the handwritten images of the MNIST data set are grayscale.

Fig. 6. Example of the MNIST character data set.

B. Support Vector Machine (SVM)
The SVM algorithm was invented by Vapnik [20] and is intended to be applied to many types of recognition problems. The linear SVM addresses problems with two sets of data and is very useful for two-class classification problems. The algorithm finds the best separating hyperplane, with the maximum distance to the training points closest to the hyperplane; the training points closest to the separating hyperplane are called support vectors. The basic SVM is thus a linear binary classifier, useful for two-class problems; for complex data such as images, however, a linear separation can be poor. Let D denote a training set:

D = {(x_i, y_i), 1 <= i <= M} (1)

where x_i ∈ R^N is an input vector and y_i ∈ {+1, -1} is its binary label. The SVM optimization algorithm computes the best model from the set of hyperplanes in R^N, and the decision function is given by

f(x) = sign(w^T x + b) (2)

where w is a weight vector perpendicular to the hyperplane and b is a bias value. To calculate w and b, the SVM algorithm minimizes the following objective:

J(w, ξ) = (1/2) w^T w + C Σ_i ξ_i (3)

subject to the constraints:

w^T x_i + b >= +1 - ξ_i for y_i = +1 (4)
w^T x_i + b <= -1 + ξ_i for y_i = -1 (5)

where C controls the trade-off between training errors and generalization, and ξ_i >= 0 are slack variables that tolerate some errors but must be minimized; this soft-margin method is used to fit complex data models and, if used incorrectly, overfitting can occur. The maximum-margin separating hyperplane is w^T x + b = 0; it lies at the largest distance from the closest positive (w^T x + b = +1) and negative (w^T x + b = -1) margins. The linear kernel function is defined as follows:

K(x_i, x_j) = x_i^T x_j (6)

For non-linear problems with multiple groups, the linear SVM algorithm is extended to deal with non-linear, multi-class classification problems by creating and combining multiple binary classifiers and by using non-linear kernel functions. In this work, the researcher selects the Radial Basis Function (RBF) as the non-linear similarity function in the SVM classifier. The RBF kernel calculates the similarity between two inputs as in (7):

K(x_i, x_j) = exp(-γ ||x_i - x_j||^2) (7)

where γ is the kernel parameter of the RBF kernel; too large a value can cause overfitting due to the increase in the number of support vectors.
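To make the classifier above concrete, here is a minimal sketch of an RBF-kernel SVM applied to handwritten character images, using scikit-learn and its small built-in digits set as a stand-in; the data set, the C and gamma values, and the train/test split are illustrative assumptions, not the settings used in this paper.

```python
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in data: 8x8 grayscale digit images, flattened into feature vectors.
digits = datasets.load_digits()
X = digits.images.reshape(len(digits.images), -1)
y = digits.target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# RBF-kernel SVM as in Eq. (7); C and gamma are illustrative values.
clf = svm.SVC(kernel="rbf", C=10.0, gamma=0.001)
clf.fit(X_train, y_train)

accuracy = 100.0 * accuracy_score(y_test, clf.predict(X_test))
print("Recognition accuracy: %.2f%%" % accuracy)
```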
III. PROPOSED METHODOLOGY
In this paper, we investigate an appropriate method for developing a new handwriting recognition algorithm based on the conceptual framework of the research.

Fig. 7. Flow chart of feature selection with genetic steps.

The details of the process diagram can be described as follows:
Step 1: Start with a random pairing of features.
Step 2: Evaluate each chromosome (feature group) with the objective function. Because the system cannot interpret the raw chromosome values directly, each chromosome must be decoded before calculating the RMSE.
Step 3: Calculate the fitness function and feed the result back to the GA.
Step 4: Use chromosome selection to determine the parents that will produce the next generation.
Step 5: Offspring are created from the parents by the genetic operators, giving new chromosomes in the population.
Step 6: Calculate the fitness of the offspring (using the same procedure as in Step 3).
Step 7: Chromosomes in the population are replaced by the offspring from Step 5; chromosomes with poorer fitness values are the ones replaced.
Step 8: Repeat from Step 2 until a satisfactory answer is reached. The answer comes from the best chromosome, which is evaluated against the required RMSE.
Finally, the experiments and results show that this system is more accurate than the plain SVM algorithm.

A. Chromosome Encoding
Genetic algorithms find the answer within a population; each individual answer is represented by a chromosome, or genome. The details and procedures can be described as follows. The chromosome encoding scheme is the first important step in a genetic algorithm; it is designed so that chromosomes can act as the answer representation of the system. In this study, a chromosome of length 100 represents the set of 100 extracted features, defined as (8):

A = [a1, a2, a3, ..., an] (8)

where A is the chromosome representation of the handwriting features described above, and each a_i, i = 1, 2, 3, ..., n, is the answer for one variable in the system and the coding algorithm.

B. Fitness Function
The fitness function evaluates the behaviour of the algorithm against the target value defined by the researcher. Its purpose is to determine the suitability of each chromosome within the population and the chromosome ranking; these values are used as the gauge for selecting the chromosomes that produce offspring in the next generation. Character recognition accuracy is the fitness measure in this work:

f = 100 - %Error (9)
%Error = Relative error x 100 (10)
Relative error = |x_mea - x_t| / x_t (11)

where x_mea is the number of characters recognized and x_t is the number of characters to be recognized. The suitability of each chromosome is utilized in the various stages of the genetic algorithm described in the following subsections, which discuss how chromosomes are selected, the key criterion for choosing the appropriate features.

C. Selection
A subgroup of the population is chosen at random, and the fittest individual in the subgroup is selected for the next generation. The selection method used in this work is tournament selection.
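The encoding, fitness, and selection steps above can be illustrated with a short sketch. It assumes a binary chromosome (a mask over the 100 extracted features) and a hypothetical evaluate_recognizer helper that trains and tests a classifier on the selected features and returns the number of correctly recognized characters; the helper name and the tournament size are illustrative assumptions, not details from the paper.

```python
import random

N_FEATURES = 100  # chromosome length: one gene per extracted feature

def random_chromosome(n=N_FEATURES):
    # A = [a1, a2, ..., an], each a_i in {0, 1}: 1 keeps the feature, 0 drops it.
    return [random.randint(0, 1) for _ in range(n)]

def fitness(chromosome, evaluate_recognizer, n_total_chars):
    """Fitness per Eqs. (9)-(11): f = 100 - %Error."""
    # evaluate_recognizer is a hypothetical helper that trains/tests the
    # classifier on the features selected by `chromosome` and returns the
    # number of correctly recognized characters (x_mea).
    x_mea = evaluate_recognizer(chromosome)
    relative_error = abs(x_mea - n_total_chars) / n_total_chars  # Eq. (11)
    percent_error = relative_error * 100                         # Eq. (10)
    return 100 - percent_error                                   # Eq. (9)

def tournament_select(population, fitnesses, k=3):
    # Tournament selection: sample k individuals at random, keep the fittest.
    contenders = random.sample(range(len(population)), k)
    best = max(contenders, key=lambda i: fitnesses[i])
    return population[best]
```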
D. Uniform Crossover
Pairs of chromosomes are selected at random from the population and crossed over at randomly chosen positions, exchanging genes. In this research, uniform crossover is used: any point on the chromosome may be a cut point, and a crossover mask is used to make the crossover uniform. The mask is binary and its size (number of bits) equals the length of the chromosome; the value of the mask at each position indicates how the crossover between the parents is performed.

E. Mutation
Mutation helps the genetic algorithm avoid settling on a locally optimal answer by preventing the chromosomes in the population from all converging to the same point. The probability of mutation is

m = 1/n (12)

where n is the total number of attributes of the handwriting feature. This completes the details of the genetic process.

IV. EXPERIMENTAL AND RESULTS
Based on the research process described above, the researchers tested the efficiency of the presented algorithms and techniques by measuring the accuracy of the proposed algorithm, as reported in this section. This paper deals with recognizing handwritten characters from three groups of characters written by different authors, based on artificial intelligence; one challenge is that some character classes are similar to each other. Bangla is the second most popular language in India and Bangladesh; the Bangla handwriting data set consists of 45 classes, with 4,627 examples in the training set and 900 examples in the test set. The Latin handwritten characters are written in Dutch: 251 writers produced 37,616 handwritten characters, of which 26,329 are training examples and 11,287 are test examples. MNIST is a large database of handwritten digits, often used for training image processing systems; each image is 28 × 28 pixels.

TABLE I. SUMMARY OF THE HANDWRITTEN DATA SETS
Data set    Classes    Training set    Testing set
Bangla      45         4,627           900
Latin       25         26,329          11,287
MNIST       10         60,000          10,000

The handwritten character recognition system is evaluated by how closely its measured values match the actual values. Accuracy is calculated using equation (13):

%Accuracy = 100 - %Error (13)

This paper compares the feature selection method on handwritten scripts using SVM, kNN, and MLP as classifiers, over the Bangla, Latin, and MNIST handwriting sets. The classification accuracies are shown in Table II.

TABLE II. ACCURACY RATES (%) ON THE THREE HANDWRITTEN CHARACTER SETS FOR SVM, KNN, AND MLP
Data set    SVM       kNN      MLP
Bangla      94.05     85.60    90.50
Latin       98.58     96.31    93.79
MNIST       100.00    95.11    99.48

As Table II shows, the three handwriting data sets were tested using popular character recognition algorithms: the support vector machine (SVM), k-nearest neighbour (kNN), and multi-layer perceptron (MLP). Recognition of the MNIST data set with the SVM is 100.00% accurate because the handwriting in this set forms clear figures with minimal overlap, so the SVM can recognize it reliably. Considering the recognition accuracy on all three handwriting sets, the SVM provides the most accurate recognition of the data. Therefore, in this research, support vector machines (SVMs) are used for character recognition.
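Putting the operators from Section III together, the sketch below shows how one generation of the genetic search could proceed (tournament selection, uniform crossover with a random binary mask, and bit-flip mutation with probability 1/n as in Eq. (12)). It reuses the hypothetical tournament_select helper from the previous sketch and is only an illustration of the procedure, not the authors' implementation.

```python
import random

def uniform_crossover(parent_a, parent_b):
    # A random binary mask of chromosome length decides, gene by gene,
    # which parent contributes to the child (Section III-D).
    mask = [random.randint(0, 1) for _ in parent_a]
    return [a if m == 1 else b for a, b, m in zip(parent_a, parent_b, mask)]

def mutate(chromosome, rate=None):
    # Bit-flip mutation with probability m = 1/n (Eq. (12)).
    rate = rate if rate is not None else 1.0 / len(chromosome)
    return [1 - gene if random.random() < rate else gene for gene in chromosome]

def next_generation(population, fitnesses):
    # Build a new population of the same size from the current one.
    new_population = []
    for _ in range(len(population)):
        parent_a = tournament_select(population, fitnesses)  # from the earlier sketch
        parent_b = tournament_select(population, fitnesses)
        new_population.append(mutate(uniform_crossover(parent_a, parent_b)))
    return new_population
```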
V. CONCLUSION
This paper aims to develop new algorithms for handwriting recognition systems. The study uses sets of character images that are commonly used for character recognition: the Bangla, Latin, and MNIST sets, all popular handwriting collections. Based on the results of this handwriting recognition work, the researchers designed and developed an algorithm grounded in digital image processing principles to prepare the image data for processing. The images used in this experiment are divided into series; the Bangla, Latin, and MNIST sets were used in tests to measure the effectiveness of this research. Note that the test images have a well-formed contrast between the text and the background, and the background of each picture is white. Feature extraction was then performed to determine pixel densities according to image processing principles, and genetic algorithms were used to analyze the extracted handwriting features before they were passed to the recognizer.

REFERENCES
[1] Schomaker, L. R. B., Franke, K., and Bulacu, M. (2007). Using codebooks of fragmented connected-component contours in forensic and historic writer identification. Pattern Recognition Letters, 28(6):719-727. Pattern Recognition in Cultural Heritage and Medical Applications.
[2] Bunke, H. and Riesen, K. (2011). Recent advances in graph-based pattern recognition with applications in document analysis. Pattern Recognition, 44(5):1057-1067.
[3] Liwicki, M., Bunke, H., Pittman, J., and Knerr, S. (2011). Combining diverse systems for handwritten text line recognition. Machine Vision and Applications, 22(1):39-51.
[4] Uchida, S., Ishida, R., Yoshida, A., Cai, W., and Feng, Y. (2012). Character image patterns as big data. In Frontiers in Handwriting Recognition (ICFHR), The 13th International Conference on, pages 479-484.
[5] Khakham, P., Chumuang, N., and Ketcham, M., "Isan Dhamma Handwritten Characters Recognition System by Using Functional Trees Classifier," 2015 11th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Bangkok, 2015, pp. 606-612. doi: 10.1109/SITIS.2015.68
[6] Chumuang, N. and Ketcham, M., "The Intelligence algorithm for character recognition on palm leaf manuscript," Far East Journal of Mathematical Sciences (FJMS), volume 98, Issue 3, pp. 333-345, October 2015.
[7] Chamchong, R., Fung, C., and Wong, K. W. (2010). Comparing binarisation techniques for the processing of ancient manuscripts. In Nakatsu, R., Tosa, N., Naghdy, F., Wong, K. W., and Codognet, P., editors, Cultural Computing, volume 333 of IFIP Advances in Information and Communication Technology, pages 55-64. Springer Berlin Heidelberg.
[8] Van der Zant, T., Schomaker, L. R. B., and Haak, K. (2008). Handwritten word spotting using biologically inspired features. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 30(11):1945-1957.
[9] Manning, Candice A., Timothy J. Mermagen, and Angelique A. Scharine. "Speech recognition performance of listeners with normal hearing, sensorineural hearing loss, and sensorineural hearing loss and bothersome tinnitus when using air and bone conduction communication headsets." The Journal of the Acoustical Society of America 139.4 (2016): 1995-1995.
[10] Oliveira, N., Paulo C., and Nelson A. "The impact of microblogging data for stock market prediction: Using Twitter to predict returns, volatility, trading volume and survey sentiment indices." Expert Systems with Applications 73 (2017): 125-144.
<s>Research Article
Handwritten Bangla Character Recognition Using the State-of-the-Art Deep Convolutional Neural Networks
Md Zahangir Alom,1 Paheding Sidike,2 Mahmudul Hasan,3 Tarek M. Taha,1 and Vijayan K. Asari1
1Department of Electrical and Computer Engineering, University of Dayton, Dayton, OH, USA
2Department of Earth and Atmospheric Sciences, Saint Louis University, St. Louis, MO, USA
3Comcast Labs, Washington, DC, USA
Correspondence should be addressed to Md Zahangir Alom; alomm1@udayton.edu
Received 1 March 2018; Revised 10 May 2018; Accepted 10 July 2018; Published 27 August 2018
Academic Editor: Friedhelm Schwenker
Copyright © 2018 Md Zahangir Alom et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
In spite of advances in object recognition technology, handwritten Bangla character recognition (HBCR) remains largely unsolved due to the presence of many ambiguous handwritten characters and excessively cursive Bangla handwriting. Even many advanced existing methods do not lead to satisfactory performance in practice for HBCR. In this paper, a set of state-of-the-art deep convolutional neural networks (DCNNs) is discussed and their performance on the application of HBCR is systematically evaluated. The main advantage of DCNN approaches is that they can extract discriminative features from raw data and represent them with a high degree of invariance to object distortions. The experimental results show the superior performance of DCNN models compared with other popular object recognition approaches, which implies that DCNNs can be a good candidate for building an automatic HBCR system for practical applications.
1. Introduction
Automatic handwritten character recognition has many academic and commercial interests. The main challenge in handwritten character recognition is to deal with the enormous variety of handwriting styles produced by different writers. Furthermore, some complex handwriting scripts comprise different styles for writing words. Depending on the language, characters are written isolated from each other in some cases (e.g., Thai, Lao, and Japanese); in other cases, they are cursive and sometimes characters are connected to each other (e.g., English, Bangla, and Arabic). This challenge has already been recognized by many researchers in the field of natural language processing (NLP) [1–3]. Handwritten character recognition is more challenging than recognition of printed characters for the following reasons: (1) handwritten characters written by different writers are not only nonidentical but also vary in aspects such as size and shape; (2) numerous variations in the writing styles of individual characters make the recognition task difficult; (3) the similarities of different characters in shape, the overlaps, and the interconnections of neighbouring characters further complicate the problem. In summary, the large variety of writing styles and the complex features of handwritten characters make it challenging to accurately classify handwritten characters. Bangla is one of the most widely spoken languages, ranked fifth in the world, and is spoken by more than 200 million people [4, 5]. It is the national and official language of Bangladesh and the second most popular language in India. In addition, Bangla has a rich heritage. 
February 21st was declared International Mother Language Day by UNESCO in honour of the martyrs of the 1952 language movement in Bangladesh. In terms of the Bangla character set, it involves a Sanskrit-based script that is inherently different from English- or Latin-based</s>
<s>scripts, and it is relatively difficult to achieve the desired accuracy on the recognition tasks. Therefore, developing a recognition system for Bangla characters is of great interest [4, 6, 7]. In the Bangla language, there are 10 digits and 50 characters including vowels and consonants, some of which carry an additional sign above and/or below. Moreover, Bangla contains many similarly shaped characters; in some cases, a character differs from its similar counterpart by only a single dot or mark. Furthermore, Bangla also contains some special characters that are equivalent representations of vowels. This makes it difficult to achieve good performance with simple classification techniques and hinders the development of a reliable handwritten Bangla character recognition (HBCR) system. There are many applications of HBCR, such as Bangla optical character recognition, national ID number recognition, automatic license plate recognition for vehicle and parking lot management, post office automation, and online banking. Some example images of these applications are shown in Figure 1. In this work, we investigate HBCR on Bangla numerals, alphabets, and special characters using state-of-the-art deep convolutional neural network (DCNN) [8] models. The contributions of this paper can be summarized as follows: (i) the first comprehensive evaluation of state-of-the-art DCNN models, including VGG Net [9], the All Convolutional Neural Network (All-Conv) [10], Network in Network (NiN) [11], the Residual Network (ResNet) [12], the Fractal Network (FractalNet) [13], and the Densely Connected Convolutional Network (DenseNet) [14], on the application of HBCR; (ii) extensive experiments on HBCR, including handwritten digit, alphabet, and special character recognition; (iii) better recognition accuracy, to the best of our knowledge, than other existing approaches reported in the literature.
2. Related Work
Although some studies on Bangla character recognition have been reported in past years [15–17], few remarkable works are available for HBCR. Pal and Chaudhuri [5] proposed a new feature-extraction-based method for handwritten Bangla character recognition in which the concept of water overflow from a reservoir is utilized. Liu and Suen [18] introduced directional gradient features for handwritten Bangla digit classification using the ISI Bangla numeral dataset [19], which consists of 19,392 training samples, 4,000 test samples, and 10 classes (i.e., 0 to 9). Surinta et al. [20] proposed a system using a set of features such as the contour of the handwritten image computed using 8-directional codes, the distance between hotspots and black pixels, and the pixel intensities of small blocks. Each of these features is separately fed into a support vector machine (SVM) [21] classifier, and the final decision is made by majority voting. Das et al. [22] exploited a genetic-algorithm-based region sampling method for local feature selection and achieved 97% accuracy on HBCR. Xu et al. [23] used a hierarchical Bayesian network which directly takes raw images as the network inputs and classifies them using a bottom-up approach. 
Sparse representation classifiers have also been applied to Bangla digit recognition [4], where 94% accuracy was reported for handwritten digit recognition. In [6], handwritten Bangla basic and compound character recognition using a multilayer perceptron (MLP) [24] and an SVM classifier was suggested, while handwritten Bangla numeral recognition using an MLP was presented in [7], where the average recognition rate reached 96.67%. Recently, deep-learning-based methods have drawn increasing attention in handwritten character recognition [25, 26]. Ciregan and Meier [27]</s>
<s>applied multicolumn CNNs to Chinese character classification. Kim and Xie [25] applied DCNNs to Hangul handwritten character recognition, achieving superior performance against classical methods. A deep learning framework in the form of a CNN-based HBCR scheme was introduced in [26], where the best recognition accuracy reached 85.36% on the authors' own dataset. In this paper, we, for the first time, introduce the very latest DCNN models, including the VGG network, All-Conv, NiN, ResNet, FractalNet, and DenseNet, for handwritten Bangla character (digit, alphabet, and special character) recognition.
3. Deep Neural Networks
The deep neural network (DNN) is an active area in the field of machine learning and computer vision [28], and it generally comes in three popular architectures: the Deep Belief Net (DBN) [29], the Stacked Autoencoder (SAE) [30], and the CNN. Owing to the composition of many layers, DNN methods are more capable of representing highly varying nonlinear functions than shallow learning approaches [31]. The low and middle levels of a DNN abstract features from the input image, whereas the high level performs the classification operation on the extracted features. As a result, an end-to-end framework is formed by integrating all necessary layers within a single network; therefore, DNN models often lead to better accuracy than other types of machine learning methods. Recent successful practice of DNNs covers a variety of topics such as electricity consumption monitoring [32], radar signal examination [33], medical image analysis [34–36], food security [37–39], and remote sensing [40–42]. Among all deep learning approaches, the CNN is one of the most popular models and has been providing state-of-the-art performance on segmentation [43, 44], human action recognition [45], image superresolution [46], scene labelling [47], and visual tracking [48].
3.1. Convolutional Neural Network (CNN). The CNN was initially applied to the digit recognition task by LeCun et al. [8]. The CNN and its variants have gradually been adopted in various applications [46, 49]. The CNN is designed to imitate human visual processing, and it has highly optimized structures for processing 2D images. Furthermore, the CNN can effectively learn the extraction and abstraction of 2D features. In particular, the max-pooling layer of the CNN is very effective in absorbing shape variations. Moreover, sparse connections with tied weights make the CNN involve far fewer parameters than a fully connected network of similar size. Most importantly, the CNN is trainable with gradient-based learning algorithms and suffers less from the diminishing gradient problem. Given that the gradient-based algorithm trains the whole network to minimize an error criterion directly, the CNN can produce highly optimized weights and good generalization performance [50]. The overall architecture of a CNN, as shown in Figure 2, consists of two main parts: a feature extractor and a classifier. In the feature extraction unit, each layer of the network receives the output of its immediate previous layer as input and passes its current output as input to the immediate next layer, whereas the classification part generates the predicted outputs associated with the input data. The two basic layers in a CNN architecture are the convolution and pooling [8] layers. In the convolution layer, each node extracts features from the input images by convolution operations on the input nodes. The max-pooling layer abstracts the features through average or maximum operations on the input nodes. 
The outputs of the (l−1)th layer are used as inputs for the lth layer, where the inputs go through a set of kernels followed by the nonlinear function</s>
<s>ReLU. Here, f refers to the ReLU activation function. For example, if x_i^{l−1} are the inputs from the (l−1)th layer, k_{i,j}^l are the kernels of the lth layer, and the biases of the lth layer are represented by b_j^l, then the convolution operation can be expressed as
x_j^l = f(x_i^{l−1} ∗ k_{i,j}^l) + b_j^l. (1)
The subsampling or pooling layer abstracts the features through an average or maximum operation on the input nodes. For example, if a 2 × 2 downsampling kernel is applied, then each output dimension will be half of the corresponding input dimension for all inputs. The pooling operation can be stated as
x_j^l = down(x_i^{l−1}). (2)
In contrast to traditional neural networks, the CNN extracts low- to high-level features: the higher-level features are derived from the propagated features of the lower-level layers. As the features propagate to the highest layer, the dimension of the features is reduced depending on the size of the convolution and pooling masks; however, the number of feature maps is usually increased to select or map the most suitable features of the input images for better classification accuracy. The outputs of the last layer of the CNN are used as inputs to the fully connected network, which typically uses a Softmax operation to produce the classification outputs. For an input sample x, weight vector w, and K distinct linear functions, the Softmax operation can be defined for the ith class as
P(y = i | x) = exp(x^T w_i) / Σ_{k=1}^{K} exp(x^T w_k). (3)
However, different variants of the DCNN architecture have been proposed over the last few years, and the following section discusses six popular DCNN models.
3.2. CNN Variants. As far as the CNN architecture is concerned, it can be observed that there are some important and fundamental components used to construct an efficient DCNN architecture. These components are the convolution layer, pooling layer, fully connected layer, and Softmax layer. The advanced architecture of this type of network consists of a stack of convolutional and max-pooling layers followed by fully connected and Softmax layers at the end. Noticeable examples of such networks include LeNet [8], AlexNet [49], VGG Net, All-Conv, and NiN. Other alternative and advanced architectures have been proposed, including GoogleNet with inception layers [51], ResNet, FractalNet, and DenseNet; however, there are some topological differences among the modern architectures. Out of the many DCNN architectures, AlexNet, VGG Net, GoogleNet, ResNet, DenseNet, and FractalNet can be viewed as the most popular, owing to their excellent performance on different benchmarks for object classification. Among these models, some are designed especially for large-scale implementation, such as ResNet and GoogleNet, whereas VGG Net has a more general architecture. FractalNet is an alternative to ResNet. In contrast, DenseNet's architecture is unique in terms of unit connectivity, with every layer directly connected to all subsequent layers. In this paper, we provide a review and comparative study of All-Conv, NiN, VGG-16, ResNet, FractalNet, and DenseNet for Bangla character recognition.
Figure 1: Application of handwritten character recognition: (a) national ID number recognition system, (b) post office automation with code number recognition on an envelope, and (c) automatic license plate recognition.
The basic overview of these architectures is given in the following section.</s>
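As a concrete illustration of the basic CNN of Figure 2 (convolution with ReLU as in (1), max-pooling as in (2), and a Softmax classifier as in (3)), the following is a minimal sketch using the standalone Keras package, which Section 4 reports using for the experiments. It is not the authors' code: the layer sizes simply follow Figure 2, and the optimizer choice is an assumption.

# Minimal sketch (not the authors' code) of the basic CNN of Figure 2 in Keras:
# two convolution + max-pooling stages, then a fully connected Softmax classifier.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

def build_basic_cnn(input_shape=(32, 32, 1), num_classes=10):
    model = Sequential([
        # 32 feature maps of 28 x 28: (32 - 5)/1 + 1 = 28, cf. eq. (6)
        Conv2D(32, kernel_size=(5, 5), activation="relu", input_shape=input_shape),
        MaxPooling2D(pool_size=(2, 2)),                      # 32 @ 14x14, eq. (2)
        Conv2D(64, kernel_size=(5, 5), activation="relu"),   # 64 @ 10x10, eq. (1)
        MaxPooling2D(pool_size=(2, 2)),                      # 64 @ 5x5
        Flatten(),
        Dense(num_classes, activation="softmax"),            # eq. (3)
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model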
<s>3.2.1. VGG-16. The Visual Geometry Group (VGG) network was the runner-up of the ImageNet Large Scale Visual Recognition Competition (ILSVRC) in 2014 [52]. In this architecture, two convolutional layers are used consecutively with a rectified linear unit (ReLU) [53] activation function, followed by a single max-pooling layer, several fully connected layers with ReLU, and a Softmax final layer. There are three types of VGG Net based on the architecture; these networks contain 11, 16, and 19 layers and are named VGG-11, VGG-16, and VGG-19, respectively. The basic structure of the VGG-11 architecture contains eight convolution layers, one max-pooling layer, and three fully connected (FC) layers followed by a single Softmax layer. The configuration of VGG-16 is as follows: convolution and max-pooling layers: 13 and 1, respectively; FC layers: 3; Softmax layer: 1; total weights: 138 million. VGG-19 consists of 16 convolutional layers, one max-pooling layer, and 3 FC layers followed by a Softmax layer. The basic building blocks of the VGG architecture are shown in Figure 3. In this implementation, we use a VGG-16 network with fewer feature maps in the convolutional layers than the standard VGG-16 network.
3.2.2. All Convolutional Network (All-Conv). The layer specification of All-Conv is given in Figure 4. The basic architecture is composed of two convolutional layers followed by a max-pooling layer. Instead of a fully connected layer, global average pooling (GAP) [11] with a dimension of 6 × 6 is used. Finally, the Softmax layer is used for classification, and the output dimension is set according to the number of classes.
3.2.3. Network in Network (NiN). This model is quite different from the aforementioned DCNN models due to the following properties [11]: (i) it uses multilayer convolution where convolution is performed with 1 × 1 filters; (ii) it uses GAP instead of a fully connected layer. The concept of using 1 × 1 convolutions helps to increase the depth of the network. The GAP significantly changes the network structure and is nowadays often used as a replacement for fully connected layers; the GAP on a large feature map is used to generate a final low-dimensional feature vector instead of reducing the feature map to a small size and then flattening it into a feature vector.
Figure 2: Basic CNN architecture for digit recognition (input 32 × 32; feature maps 32@28×28, 32@14×14, 64@10×10, and 64@5×5 produced by alternating convolution and max-pooling layers).
3.2.4. Residual Network (ResNet). The ResNet architecture has become very popular in the computer vision community. ResNet variants have been experimented with different numbers of layers; the configuration used here is as follows: number of convolution layers: 49 (34, 152, and 1202 layers for other versions of ResNet), number of fully connected layers: 1, weights: 25.5M. The basic block diagram of the ResNet architecture is shown in Figure 5. If the input of the residual block is x_{l−1}, the output of this block is x_l. After performing operations on x_{l−1} (e.g., convolution with different filter sizes, and batch normalization (BN) [54] followed by an activation function such as ReLU), the output F(x_{l−1}) is produced. The final output of the residual unit is defined as
x_l = F(x_{l−1}) + x_{l−1}. (4)
The Residual Network consists of several basic residual units. Different residual units have been proposed with different types of layers; however, the operations between the residual units vary depending on the architecture, as explained in [12].
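The residual unit of Figure 5 and equation (4) can be sketched in a few lines with the Keras functional API. This is an illustrative single block with an identity shortcut, not the 49-layer ResNet configuration used in the experiments; the filter count and input shape are assumptions.

# Illustrative residual unit (Figure 5, eq. (4)): x_l = F(x_{l-1}) + x_{l-1}.
# A sketch only; the full ResNet used in the paper stacks many such units.
from keras.layers import Input, Conv2D, BatchNormalization, Activation, Add
from keras.models import Model

def residual_unit(x, filters=64):
    shortcut = x                                    # identity branch, x_{l-1}
    y = Conv2D(filters, (3, 3), padding="same")(x)  # F(x_{l-1}): conv -> BN -> ReLU -> conv -> BN
    y = BatchNormalization()(y)
    y = Activation("relu")(y)
    y = Conv2D(filters, (3, 3), padding="same")(y)
    y = BatchNormalization()(y)
    y = Add()([y, shortcut])                        # x_l = F(x_{l-1}) + x_{l-1}
    return Activation("relu")(y)

inputs = Input(shape=(32, 32, 64))
outputs = residual_unit(inputs)
block = Model(inputs, outputs)                      # a single residual block as a Keras model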
3.2.5. FractalNet.</s>
<s>The FractalNet architecture is an advanced alternative to ResNet that is very efficient for designing very large networks with shallow subnetworks but shorter paths for the propagation of gradients during training [13]. The concept is based on drop-path, another regularizer for large networks; as a result, it helps to enforce a speed-versus-accuracy tradeoff. The basic block diagram of FractalNet is shown in Figure 6, where x denotes the actual inputs of FractalNet, and z and f(z) are the inputs and outputs of a fractal block, respectively.
3.2.6. Densely Connected Network (DenseNet). DenseNet is a densely connected CNN in which each layer is connected to all previous layers [14]; because it forms very dense connectivity between the layers, it is called DenseNet. The DenseNet consists of several dense blocks, and the layers between two adjacent blocks are called transition layers. The conceptual diagram of a dense block is shown in Figure 7. According to the figure, the lth layer receives all the feature maps x_0, x_1, x_2, ..., x_{l−1} from the previous layers as input, which is expressed by
x_l = H_l([x_0, x_1, x_2, ..., x_{l−1}]), (5)
where [x_0, x_1, x_2, ..., x_{l−1}] denotes the concatenated features from layers 0, ..., l−1 and H_l(·) produces a single tensor.
Figure 3: Basic architecture of VGG Net: convolution (Conv) and fully connected (FC) layers with a Softmax layer at the end.
Figure 4: All convolutional network framework (3 × 3 convolutions with 128, 256, and 512 feature maps and ReLU, 3 × 3 max-pooling with stride 2, 6 × 6 global average pooling, and Softmax).
Figure 5: Basic diagram of the residual block (convolution and ReLU activation on the branch).
DenseNet performs three consecutive operations: BN, followed by ReLU and a 3 × 3 convolution. In the transition block, 1 × 1 convolutional operations are performed with BN, followed by a 2 × 2 average pooling layer. This new architecture has achieved state-of-the-art accuracy for object recognition on five different competitive benchmarks.
3.2.7. Network Parameters. The number of network parameters is a very important criterion for assessing the complexity of an architecture, and it can be used to compare different architectures. First, the dimension of the output feature map can be computed as
M = (N − F)/S + 1, (6)
where N denotes the dimension of the input feature maps, F refers to the dimension of the filters or receptive field, S represents the stride of the convolution, and M is the dimension of the output feature maps. The number of parameters (without biases) of a single layer is obtained by
P_l = F × F × FM_{l−1} × FM_l, (7)
where P_l represents the total number of parameters in the lth layer, FM_l is the total number of output feature maps of the lth layer, and FM_{l−1} is the total number of feature maps in the (l−1)th layer.</s>
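Equations (6) and (7) are simple enough to state as code. The helpers below are an illustrative translation, not part of the original paper; they reproduce the worked example in the next paragraph (N = 32, F = 5, S = 1 gives M = 28) and the first convolution entry of Table 1.

# Illustrative helpers for eq. (6) and eq. (7); not part of the original paper.
def output_dim(n, f, s=1):
    """Output feature-map dimension M = (N - F) / S + 1."""
    return (n - f) // s + 1

def conv_params(f, fm_prev, fm_curr):
    """Parameter count (without biases) P_l = F * F * FM_{l-1} * FM_l."""
    return f * f * fm_prev * fm_curr

# Worked example from the text: a 32 x 32 input with a 5 x 5 filter and stride 1.
print(output_dim(32, 5, 1))        # 28, so the output map is 28 x 28
# First convolution of the All-Conv model in Table 1: 3 x 3 kernels, 3 -> 128 maps.
print(conv_params(3, 3, 128))      # 3456 parameters, matching Table 1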
<s>For example, let a 32 × 32 dimensional image be the input (N), with a filter of size (F) 5 × 5 and stride (S) 1 for the convolutional layer. The output dimension (M) of the convolutional layer is then 28 × 28, calculated according to (6). For better illustration, a summary of the parameters used in the All-Conv architecture is given in Table 1. Note that the number of trainable parameters is zero in the pooling layers.
4. Results and Discussion
The entire experiment is performed on a desktop computer with an Intel® Core i7 CPU @ 3.33 GHz and 56.00 GB memory, using Keras with Theano as the backend in a Linux environment. We evaluate the state-of-the-art DCNN models on three datasets from CMATERdb (available at: https://code.google.com/archive/p/cmaterdb/) containing Bangla handwritten digits, alphabets, and special characters.
Figure 7: A 4-layer dense block with growth rate k = 3; each layer (BN-ReLU-Conv) takes all of the preceding feature maps as input, followed by a transition layer.
Figure 6: An example of the FractalNet architecture (convolution, joining, and pooling layers arranged in fractal blocks).
The statistics of the three datasets used in this paper are summarized in Table 2. For convenience, we name the datasets Digit-10, Alphabet-50, and SpecialChar-13, respectively. All images are rescaled to 32 × 32 pixels in our experiment.
4.1. Bangla Handwritten Digit Dataset. The standard samples of the numerals with the respective Arabic numerals are shown in Figure 8. The performance of both DBN and CNN is evaluated on a Bangla handwritten benchmark dataset called CMATERdb 3.1.1 [22]. This dataset contains 6,000 images of unconstrained handwritten isolated Bangla numerals. Each digit has 600 images, rescaled to 32 × 32 pixels. Some sample images from the database are shown in Figure 9. Visual inspection suggests that there is no visible noise; however, variability in writing style is quite high due to user dependency. In our experiments, the dataset is split into a training set and a test set for the evaluation of the different DCNN models. The training set consists of 4,000 images (400 randomly selected images of each digit); the remaining 2,000 images are used for testing. Figure 10 shows the training loss of all DCNN models during 250 epochs. It can be observed that FractalNet and DenseNet converge faster than the other networks, and the worst convergence is obtained for the All-Conv network. The validation accuracy is shown in Figure 11, where DenseNet and FractalNet show the best recognition accuracy among all DCNN models. Finally, the testing accuracy of all the DCNN models is shown in Figure 12. From the results, it can be clearly seen that DenseNet provides the best recognition accuracy compared with the other networks.
4.2. Bangla Handwritten Alphabet-50. In our implementation, the basic fifty alphabets, including 11 vowels and 39 consonants, are considered. Samples of the 39 consonant and 11 vowel characters are shown in Figures 13(a) and 13(b), respectively. The Alphabet-50 dataset contains 15,000 samples, of which 12,000 are used for training and the remaining 3,000 samples are used for testing. Since the dataset contains samples with different dimensions, we rescale all input images to 32 × 32 pixels for better fitting to the convolutional operation. 
Some randomly selected samples from this database are shown in Figure 14.
Table 1: Parameter specification of the All-Conv model.
Layers | Operations | Feature maps | Size of feature maps | Size of kernels | Number of parameters
Inputs | — | — | 32 × 32 × 3 | — | —
C1 | Convolution | 128 | 30 × 30 | 3 × 3 | 3,456
C2 | Convolution | 128 | 28 × 28 | 3 × 3 | 147,456
— | Max-pooling | 128 | 14 × 14 | 2 × 2 | N/A</s>
<s>C3 | Convolution | 256 | 12 × 12 | 3 × 3 | 294,912
C4 | Convolution | 256 | 10 × 10 | 3 × 3 | 589,824
— | Max-pooling | 256 | 5 × 5 | 2 × 2 | N/A
C5 | Convolution | 512 | 3 × 3 | 3 × 3 | 1,179,648
C6 | Convolution | 512 | 3 × 3 | 1 × 1 | 262,144
GAP1 | GAP | 512 | 3 × 3 | N/A | N/A
Outputs | Softmax | 10 | 1 × 1 | N/A | 5,120
Table 2: Statistics of the databases used in our experiment.
Dataset | # training samples | # testing samples | Total samples | Number of classes
Digit-10 | 4,000 | 2,000 | 6,000 | 10
Alphabet-50 | 12,000 | 3,000 | 15,000 | 50
SpecialChar-13 | 2,196 | 935 | 3,131 | 13
Figure 8: The first row shows the actual Bangla digits and the second row shows the corresponding Arabic numerals (0–9).
Figure 9: Sample handwritten Bangla numeral images from the CMATERdb 3.1.1 database, including digits from 1 to 10.
Figure 10: Training loss of different architectures for Bangla handwritten 1–10 digits.
The training loss of the different DCNN models is shown in Figure 15. It is clear that DenseNet shows the best convergence compared with the other DCNN approaches. Similar to the previous experiment, All-Conv shows the worst convergence behaviour; in addition, unexpected convergence behaviour is observed in the case of the NiN model. However, all DCNN models tend to converge after 200 epochs. The corresponding validation accuracy on Alphabet-50 is shown in Figure 16; DenseNet again shows superior validation accuracy compared with the other DCNN approaches. Figure 17 shows the testing results on handwritten Alphabet-50. DenseNet shows the best testing accuracy, with a recognition rate of 98.31%. On the other hand, the All-Conv Net provides around 94.31% testing accuracy, which is the lowest among all the DCNN models.
4.3. Bangla Handwritten Special Characters. There are several special characters (SpecialChar-13) which are equivalent representations of vowels and are combined with consonants to make meaningful words. In our evaluation, we use 13 special characters, covering 11 vowels and two additional special characters. Some samples of Bangla special characters are shown in Figure 18. It can be seen that the quality of the samples is poor, and significant variation within the same symbol makes this recognition task even more difficult. The training loss and validation accuracy for SpecialChar-13 are shown in Figures 19 and 20, respectively. From these results, it can be seen that DenseNet provides better performance, with lower loss and the highest validation accuracy among all DCNN models. Figure 21 shows the testing accuracy of the DCNN models on the SpecialChar-13 dataset. It is observed from Figure 21 that DenseNet shows the highest testing accuracy with the lowest training loss and converges very fast. The VGG-19 network shows promising recognition accuracy as well.
4.4. Performance Comparison. The testing performance is compared with several existing methods, and the results are presented in Table 3. The experimental results show that the modern DCNN models, including DenseNet, FractalNet, and ResNet, provide better testing accuracy than the other deep learning approaches and the previously proposed classical methods. In general, DenseNet provides 99.13% testing accuracy for handwritten digit recognition, which is the best accuracy that has been publicly reported to the best of our knowledge. In the case of 50-alphabet recognition, DenseNet yields 98.31% recognition accuracy, which is almost 2.5% better than the method in [55]. 
As far as we know, this is the highest accuracy for handwritten Bangla 50-alphabet recognition.</s>
<s>In addition, on the 13-special-character recognition task, the DCNNs show promising recognition accuracy; in particular, DenseNet achieves the best accuracy, which is 98.18%.
Figure 11: Validation accuracy of different architectures for Bangla handwritten 1–10 digits.
Figure 12: Testing accuracy for Bangla handwritten digit recognition.
Figure 13: Example images of handwritten characters: (a) Bangla consonant characters and (b) vowels.
4.5. Parameter Evaluation. For an impartial comparison, we have trained and tested the networks with the same optimized number of parameters as in the references. Table 4 shows the number of parameters used for the different networks for 50-alphabet recognition. The number of network parameters for digit and special character recognition was the same, except for the number of neurons in the classification layer.
4.6. Computation Time. We also calculate the computational cost for all methods, although the computation time depends on the complexity of the architecture. Table 5 presents the computational time per epoch (in seconds) during training of all the networks for the Digit-10, Alphabet-50, and SpecialChar-13 recognition tasks. From Table 5, it can be seen that DenseNet takes the longest time during training due to its dense structure but yields the best accuracy.
Figure 14: Randomly selected handwritten characters of Bangla alphabets from the Bangla handwritten Alphabet-50 dataset.
Figure 15: Training loss of different DCNN models for Bangla handwritten Alphabet-50.
Figure 16: Validation accuracy of different architectures for Bangla handwritten Alphabet-50.
Figure 17: Testing accuracy for handwritten 50-alphabet recognition using different DCNN techniques.
Figure 18: Randomly selected images of special characters from the dataset.
Figure 19: Training loss of different architectures for the Bangla 13 special characters (SpecialChar-13).
5. Conclusions
In this research, we investigated the performance of several popular deep convolutional neural networks (DCNNs) for handwritten Bangla character (digit, alphabet, and special character) recognition. The experimental results indicated that DenseNet is the best performer in classifying Bangla digits, alphabets, and special characters. Specifically, we achieved recognition rates of 99.13% for handwritten Bangla digits, 98.31% for handwritten Bangla alphabets, and 98.18% for special character recognition using DenseNet. To the best of our knowledge, these are the best recognition results on the CMATERdb dataset. 
In the future, some fusion-based DCNN models, such as the Inception Recurrent Convolutional Neural Network (IRCNN) [47], will be explored and developed for handwritten Bangla character recognition.
Figure 20: Validation accuracy of different architectures for the Bangla 13 special characters (SpecialChar-13).
Figure 21: Testing accuracy of different architectures for the Bangla 13 special characters (SpecialChar-13).
Table 4: Number of parameters of each model.
Models | Number of parameters
VGG-16 [9] | ~8.43M
All-Conv Net [10] | ~2.26M
NiN [11] | ~2.81M
ResNet [12] | ~5.63M
FractalNet [13] | ~7.84M
DenseNet [14] | ~4.25M
Table 5: Computational time (in seconds) per epoch for the different DCNN models on Digit-10, Alphabet-50, and SpecialChar-13.
Models | Digit-10 | Alphabet-50 | SpecialChar-13
VGG-16 [9] | 32 | 83 | 15
All-Conv Net [10] | 7 | 23 | 4
NiN [11] | 9 | 27 | 5
ResNet [12] | 64 | 154 | 34
FractalNet [13] | 32 | 102 | 18
DenseNet [14] | 95 | 210 | 58
Table 3: Testing accuracy of the VGG-16 network, All-Conv network, NiN, ResNet, FractalNet, and DenseNet on Digit-10, Alphabet-50, and SpecialChar-13, and comparison against other existing methods.
Types | Methods | Digit-10 (%) | Alphabet-50 (%) | SpecialChar-13 (%)</s>
<s>Existing approaches | MLP [7] | 96.67 | — | —
Existing approaches | MPCA+QTLR [56] | 98.55 | — | —
Existing approaches | GA [22] | 97.00 | — | —
Existing approaches | LeNet+DBN [57] | 98.64 | — | —
DCNN | VGGNet [9] | 97.57 | 97.56 | 96.15
DCNN | All-Conv [10] | 97.08 | 94.31 | 95.58
DCNN | NiN [11] | 97.36 | 96.73 | 97.24
DCNN | ResNet [12] | 98.51 | 97.33 | 97.64
DCNN | FractalNet [13] | 98.92 | 97.87 | 97.98
DCNN | DenseNet [14] | 99.13 | 98.31 | 98.18
Data Availability
The data used to support the findings of this study are available at https://code.google.com/archive/p/cmaterdb/.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] D. C. Cireşan, U. Meier, L. M. Gambardella, and J. Schmidhuber, "Deep, big, simple neural nets for handwritten digit recognition," Neural Computation, vol. 22, no. 12, pp. 3207–3220, 2010.
[2] U. Meier, D. C. Ciresan, L. M. Gambardella, and J. Schmidhuber, "Better digit recognition with a committee of simple neural nets," in Proceedings of International Conference on Document Analysis and Recognition (ICDAR), pp. 1250–1254, Beijing, China, September 2011.
[3] W. Song, S. Uchida, and M. Liwicki, "Comparative study of part-based handwritten character recognition methods," in Proceedings of International Conference on Document Analysis and Recognition (ICDAR), pp. 814–818, Beijing, China, September 2011.
[4] H. A. Khan, A. Al Helal, and K. I. Ahmed, "Handwritten Bangla digit recognition using sparse representation classifier," in Proceedings of 2014 International Conference on Informatics, Electronics and Vision (ICIEV), pp. 1–6, IEEE, Dhaka, Bangladesh, May 2014.
[5] U. Pal and B. Chaudhuri, "Automatic recognition of unconstrained off-line Bangla handwritten numerals," in Proceedings of Advances in Multimodal Interfaces–ICMI 2000, pp. 371–378, Springer, Beijing, China, October 2000.
[6] N. Das, B. Das, R. Sarkar, S. Basu, M. Kundu, and M. Nasipuri, "Handwritten Bangla basic and compound character recognition using MLP and SVM classifier," 2010, http://arxiv.org/abs/1002.4040.
[7] S. Basu, N. Das, R. Sarkar, M. Kundu, M. Nasipuri, and D. K. Basu, "An MLP based approach for recognition of handwritten Bangla numerals," 2012, http://arxiv.org/abs/1203.0876.
[8] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[9] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," 2014, http://arxiv.org/abs/1409.1556.
[10] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, "Striving for simplicity: the all convolutional net," 2014, http://arxiv.org/abs/1412.6806.
[11] M. Lin, Q. Chen, and S. Yan, "Network in network," 2013, http://arxiv.org/abs/1312.4400.
[12] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, Las Vegas, NV, USA, June-July 2016.
[13] G. Larsson, M. Maire, and G. Shakhnarovich, "FractalNet: ultra-deep neural networks without residuals," 2016, http://arxiv.org/abs/1605.07648.
[14] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, "Densely connected convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, no. 2, p. 3, Honolulu, HI, USA, July 2017.
[15] B. Chaudhuri and U. Pal, "A complete printed Bangla OCR system," Pattern Recognition, vol. 31, no. 5, pp. 531–549, 1998.
[16] U. Pal, On the development of an optical character recognition (OCR) system for printed Bangla script, Ph.D. dissertation, Indian Statistical Institute, Kolkata, India, 1997.
[17] U. 
Pal and B. Chaudhuri, "Indian script character recognition: a survey," Pattern Recognition, vol. 37, no. 9, pp. 1887–1899, 2004.
[18] C.-L. Liu and C. Y. Suen, "A new benchmark on the recognition of handwritten Bangla and Farsi numeral characters," Pattern Recognition, vol. 42, no. 12, pp. 3287–3295,</s>
<s>2009.
[19] B. Chaudhuri, "A complete handwritten numeral database of Bangla–a major Indic script," in Proceedings of Tenth International Workshop on Frontiers in Handwriting Recognition, Suvisoft, Baule, France, October 2006.
[20] O. Surinta, L. Schomaker, and M. Wiering, "A comparison of feature and pixel-based methods for recognizing handwritten Bangla digits," in Proceedings of 12th International Conference on Document Analysis and Recognition (ICDAR), pp. 165–169, IEEE, Buffalo, NY, USA, 2013.
[21] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
[22] N. Das, R. Sarkar, S. Basu, M. Kundu, M. Nasipuri, and D. K. Basu, "A genetic algorithm based region sampling for selection of local features in handwritten digit recognition application," Applied Soft Computing, vol. 12, no. 5, pp. 1592–1606, 2012.
[23] J.-W. Xu, J. Xu, and Y. Lu, "Handwritten Bangla digit recognition using hierarchical Bayesian network," in Proceedings of 3rd International Conference on Intelligent System and Knowledge Engineering, vol. 1, pp. 1096–1099, IEEE, Xiamen, China, November 2008.
[24] D. E. Rumelhart, J. L. McClelland, P. R. Group et al., Parallel Distributed Processing, Vol. 1, MIT Press, Cambridge, MA, USA, 1987.
[25] I.-J. Kim and X. Xie, "Handwritten Hangul recognition using deep convolutional neural networks," International Journal on Document Analysis and Recognition, vol. 18, no. 1, pp. 1–13, 2015.
[26] M. M. Rahman, M. Akhand, S. Islam, P. C. Shill, and M. H. Rahman, "Bangla handwritten character recognition using convolutional neural network," International Journal of Image, Graphics and Signal Processing, vol. 7, no. 8, pp. 42–49, 2015.
[27] D. Ciregan and U. Meier, "Multi-column deep neural networks for offline handwritten Chinese character classification," in Proceedings of 2015 International Joint Conference on Neural Networks (IJCNN), pp. 1–6, IEEE, Killarney, Ireland, July 2015.
[28] A. Voulodimos, N. Doulamis, A. Doulamis, and E. Protopapadakis, "Deep learning for computer vision: a brief review," Computational Intelligence and Neuroscience, vol. 2018, Article ID 7068349, 13 pages, 2018.
[29] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.
[30] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, "Extracting and composing robust features with denoising autoencoders," in Proceedings of the 25th International Conference on Machine Learning, pp. 1096–1103, ACM, Helsinki, Finland, July 2008.
[31] Y. Bengio et al., "Learning deep architectures for AI," Foundations and Trends® in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.
[32] J. Kim, T.-T.-H. Le, and H. Kim, "Nonintrusive load monitoring based on advanced deep learning and novel signature," Computational Intelligence and Neuroscience, vol. 2017, Article ID 4216281, 22 pages, 2017.
[33] E. Protopapadakis, A. Voulodimos, A. Doulamis, N. Doulamis, D. Dres, and M. Bimpas, "Stacked autoencoders for outlier detection in over-the-horizon radar signals," Computational Intelligence and Neuroscience, vol. 
2017, Article ID 5891417, 11 pages, 2017.
[34] H. Greenspan, B. van Ginneken, and R. M. Summers, "Guest editorial deep learning in medical imaging: overview and future promise of an exciting new technique," IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1153–1159, 2016.
[35] D. Shen, G. Wu, and H.-I. Suk, "Deep learning in medical image analysis," Annual Review of Biomedical Engineering, vol. 19, pp. 221–248, 2017.
[36] H.-C. Shin, H. R. Roth, M. Gao et al., "Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning," IEEE Transactions on Medical Imaging, vol.</s>
<s>35, no. 5, pp. 1285–1298, 2016.
[37] S. Sladojevic, M. Arsenovic, A. Anderla, D. Culibrk, and D. Stefanovic, "Deep neural networks based recognition of plant diseases by leaf image classification," Computational Intelligence and Neuroscience, vol. 2016, Article ID 3289801, 11 pages, 2016.
[38] S. P. Mohanty, D. P. Hughes, and M. Salathé, "Using deep learning for image-based plant disease detection," Frontiers in Plant Science, vol. 7, p. 1419, 2016.
[39] G. Wang, Y. Sun, and J. Wang, "Automatic image-based plant disease severity estimation using deep learning," Computational Intelligence and Neuroscience, vol. 2017, Article ID 2917536, 8 pages, 2017.
[40] L. Zhang, L. Zhang, and B. Du, "Deep learning for remote sensing data: a technical tutorial on the state of the art," IEEE Geoscience and Remote Sensing Magazine, vol. 4, no. 2, pp. 22–40, 2016.
[41] A. Romero, C. Gatta, and G. Camps-Valls, "Unsupervised deep feature extraction for remote sensing image classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 3, pp. 1349–1362, 2016.
[42] K. Nogueira, O. A. Penatti, and J. A. dos Santos, "Towards better exploiting convolutional neural networks for remote sensing scene classification," Pattern Recognition, vol. 61, pp. 539–556, 2017.
[43] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440, Boston, MA, USA, June 2015.
[44] V. Badrinarayanan, A. Kendall, and R. Cipolla, "SegNet: a deep convolutional encoder-decoder architecture for image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 12, pp. 2481–2495, 2017.
[45] S. Ji, W. Xu, M. Yang, and K. Yu, "3D convolutional neural networks for human action recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 221–231, 2013.
[46] C. Dong, C. C. Loy, K. He, and X. Tang, "Image super-resolution using deep convolutional networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295–307, 2016.
[47] C. Farabet, C. Couprie, L. Najman, and Y. LeCun, "Learning hierarchical features for scene labeling," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1915–1929, 2013.
[48] K. Zhang, Q. Liu, Y. Wu, and M.-H. Yang, "Robust visual tracking via convolutional networks without training," IEEE Transactions on Image Processing, vol. 25, no. 4, pp. 1779–1792, 2016.
[49] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of Advances in Neural Information Processing Systems, pp. 1097–1105, Lake Tahoe, NV, USA, December 2012.
[50] M. Matsugu, K. Mori, Y. Mitari, and Y. Kaneda, "Subject independent facial expression recognition with robust face detection using a convolutional neural network," Neural Networks, vol. 16, no. 5-6, pp. 555–559, 2003.
[51] C. Szegedy, W. Liu, Y. Jia et al., "Going deeper with convolutions," in Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9, Boston, MA, USA, June 2015.
[52] O. Russakovsky, J. Deng, H. Su et al., "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
[53] V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 
807–814, Haifa, Israel, June 2010.
[54] S. Ioffe and C. Szegedy, "Batch normalization: accelerating deep network training by reducing internal covariate shift," in Proceedings of International</s>
<s>Conference on Machine Learning, pp. 448–456, Lille, France, July 2015.
[55] U. Bhattacharya, M. Shridhar, S. K. Parui, P. Sen, and B. Chaudhuri, "Offline recognition of handwritten Bangla characters: an efficient two-stage approach," Pattern Analysis and Applications, vol. 15, no. 4, pp. 445–458, 2012.
[56] N. Das, J. M. Reddy, R. Sarkar et al., "A statistical–topological feature combination for recognition of handwritten numerals," Applied Soft Computing, vol. 12, no. 8, pp. 2486–2495, 2012.
[57] M. Z. Alom, P. Sidike, T. M. Taha, and V. K. Asari, "Handwritten Bangla digit recognition using deep learning," 2017, http://arxiv.org/abs/1705.02680.</s>
<s>Implementation of a Reading Device for Bengali-Speaking Visually Handicapped People
Md. Mahade Sarkar, Shuvasis Datta, Md. Mahedi Hassan
Department of Electrical and Electronic Engineering, Chittagong University of Engineering and Technology, Chittagong - 4349, Bangladesh
mahadesarkareee@gmail.com, shuvasisdatta@gmail.com, m.m.mahedihaasan@gmail.com
Abstract—A reading device is a compact hardware setup, with the necessary programs coded into it, which reads out printed documents like a human reader. People with eyesight problems cannot read books, papers, or any other kind of printed reading material. This problem can be solved by taking an image of the reading material, extracting the words from the image, and converting those words to sound, so that by hearing the synthesized speech they can understand what is written on the paper. A device is implemented for Bengali-speaking visually handicapped people. For character recognition, tesseract-ocr is used as the optical character recognition (OCR) engine. The Python gTTS module, a text-to-speech engine, is used to convert the words extracted by tesseract-ocr into sound. The whole process is implemented on a compact, Raspberry Pi-based hardware design. The accuracy of character detection from the captured image and word-to-sound conversion is as high as 85%. It should be mentioned that accuracy is calculated as the percentage of correct words out of the total words in an image.
Keywords—Tesseract; OCR; gTTS; Raspberry Pi; Linux.
I. INTRODUCTION
A large number of Bengali-speaking people from Bangladesh and India who have eyesight problems face much trouble reading printed papers and books in their native language. It is too hard for them to study with conventional systems such as Braille. Braille is a tactile writing system used by people who are blind or visually impaired; it is traditionally written on embossed paper. Braille is an analog system that takes time to learn and is not an easy way to read. Moreover, not all materials are written in the Braille system; almost all papers are printed in this modern age. This device will help such readers read printed papers and any kind of printed Bangla book. The device takes an image of the printed document. From that image, characters, words, and sentences are extracted using an optical character recognition (OCR) engine and formatted as a text file. A text-to-speech (TTS) converter engine is used to convert that text file to a sound file. The sound is played by omxplayer, a Linux-based open-source sound player. The whole process is implemented on a Raspberry Pi model B, which has great processing power in a compact design. To complete this job, the two main parts are doing OCR and converting the text file to sound. Optical Character Recognition (OCR) is the process of extracting text from an image; the main purpose of an OCR engine is to produce editable documents, such as a text file, from an image file. A lot of work has been done on OCR for the Bengali language: some authors have developed their own algorithms, and some have built on existing OCR engines. A complete printed Bangla OCR system is shown in [1], and character segmentation is shown in [2]. The Bangla word formation pattern is very complex: it has not only vowels and consonants but also compound characters, which are built from two or more individual characters [3]. In Bengali, a cluster of characters forms a word using a horizontal line called the "Matra". Identification of the Matra region and overlapping characters for OCR of printed Bengali scripts is</s>
<s>shown in [4]. Tesseract, an open-source optical character recognizer for Bangla printed documents, is shown in [5]. Several OCR engines have been proposed, but none of them is as efficient as Tesseract, so the tesseract-ocr engine has been used for OCR in this project. Tesseract is an open-source Optical Character Recognition (OCR) engine which is maintained and developed by Google. For recognition, each individual language needs pretrained data for its characters, and Tesseract has enhanced pretrained data for the Bengali language. A Text-to-Speech (TTS) system is a computer-based system capable of converting computer-readable text into speech. There are two main components in creating the sound: Natural Language Processing (NLP) and Digital Signal Processing (DSP). A good amount of work has been done from different perspectives on creating Bangla TTS. Text normalization and diphone preparation for Bangla TTS are presented in [6]. Using Epoch Synchronous Non Overlap Add (ESNOLA) for speech generation is described for the implementation of a Bengali speech synthesizer for mobile devices in [7]. A Bangla TTS system was developed using the open-source Festival toolkit in [8]. A Framework for Bangla Text to Speech Synthesis proposed a new framework for Bangla TTS [9]. All these efforts were proposed theoretically, but none of them is implemented as an open-source package that can be used in this project for smooth Bengali voice. eSpeak NG is a compact open-source speech synthesizer for Linux, Windows, and other platforms; eSpeak gives nice voice quality for English, but for Bengali the pronunciation is not accurate and the voice quality is not good. gTTS is a Python interface to a text-to-speech API that gives excellent voice quality for the Bengali language. gTTS operates through a command-line interface and requires a secure internet connection; although gTTS works online, its operation is very fast and smooth.
II. SYSTEM OVERVIEW
This device is implemented using a Raspberry Pi model B with a Pi camera. The main challenges in implementing this device are extracting characters and words from the image and pronouncing the words. The process starts with capturing an image of the printed paper. A clear, full-page captured image is essential, as the image is the raw ingredient for processing. Some preprocessing techniques are applied to improve the image quality. Detection of Bangla words is done by Tesseract: the processed image is passed to the Tesseract engine, which takes the image as input and detects the characters and words in that image file. The output of Tesseract is a text file containing the words, forming the same sentences as on the image. The text file is converted to sound using the Python gTTS module, which is based on the Google text-to-speech conversion API. The sound file is saved in the working directory in mp3 format, and a sound player is used to play it.
Fig. 1: Flow chart of the whole process (Start → Image capture → Image preprocessing → Tesseract → Text to sound → Play sound → End).
III. METHODOLOGY
Implementation of this device involves several processes. The methods used are:
• Hardware designing
• Image capture and processing
• Doing OCR with Tesseract
• Sound file creation
• Sound playing
A. Hardware designing
The whole process is implemented on a Raspberry Pi model B, which has a 1.2 GHz 64/32-bit quad-core ARM Cortex-A53 CPU and 1 GB LPDDR2</s>
<s>RAM at 900 MHz memory.Raspbian a linux based distribution is being installed asoperating system. A Pi camera is connected to the RaspberryPi to capture the image of the printed papers. Raspberry Piis powered by a charger having good current rating as highas 1A. A loud speaker is connected to Raspberry Pi’s soundport for playing sound. An on-off switch is connected to theGPIO(General Purpose Input / Output) pins to start and offthe device.B. Image capturing and processingImage of the printed document is captured by the cameraconnected to the usb port of Raspberry Pi. It should be keptin mind that the output of tesseract and the sound qualitydepends on the quality of the captured image. A clear noisefree complete image gives better output. But captured imagewill not always be a clear and complete . So some imageprocesing technique is applied to make it a better image beforepassing it to tesseract for OCR. For preparing image theseprocessing steps has been applied:• De-skewing• Despeckle• Line Removal• Smoothing Images• Canny Edge DetectionC. TesseractMain challenge for implementing this project is Banglacharaccter recognition. This is done by tesseract-ocr. Theprocessed image from previous section is passed to tesseractfor getting the editable text file. Tesseract is a optical char-acter recognition engine which maintained under open sourceproject. It is installed in Raspberry Pi using the instructionfrom github source link [10]. The OCR Engine needs apretrained data file to work on Bengali inputs. Tesseract hasalmost all language traineddata file for character recognition.For Bengali it is ’ben.traineddata’, which is placed in tessdatadirectory. This file is consisted of some other files and aconcatenation of those files. When the traineddata file placedin specific location tesseract detects character perfectly andgenerate text file. Tesseract has a level of accuracy in itsengine which is standard. This engine can work very efficientlyon it’s library for accurate matching. In our system, we haveimplemented the library file or traineddata in a very detailedmanner. It is important to mention that we need to uniquelyidentify each and every character in our system so that ifthe input file contains the character, the OCR recognizes it.For convenience, the following is a brief overview of howTesseract works [11]:• Outlines are analysed and stored• Outlines are gathered together as Blobs• Blobs are organized into text lines• Text lines are broken into words2017 IEEE Region 10 Humanitarian Technology Conference (R10-HTC)21 - 23 Dec 2017, Dhaka, Bangladesh462• First pass of recognition process attempts to recognizeeach word in turn• Satisfactory words passed to adaptive trainer• Lessons learned by adaptive trainer employed in a secondpass, which attempts recognize the words that were notrecognized satisfactorily in the first pass• Fuzzy spaces resolved and text checked for small caps• Digital texts are outputtedDuring these processes, Tesseract uses:• Algorithms for detecting text lines from a skewed page• Algorithms for detecting proportional and non proportional words (a proportional word is a word where allthe letters are the same width)• Algorithms for chopping joined characters and for associating broken characters• Linguistic analysis to identify the most likely wordformed by a cluster of characters• Two character</s>
<s>classifiers: a static classifier, and an adap-tive classifier which employs training data, and whichis better at distinguishing between upper and lower caselettersD. Text to SpeechMain objective of this project is read out the documents.So text file found from tesseracct, need to convert to sound.That is done by gTTS. gTTS (Google Text to Speech) is aPython interface for Google’s Text to Speech API. Createsan mp3 file with the gTTS module or gtts-cli command lineutility. It allows unlimited lengths to be spoken by tokenizinglong sentences where the speech would naturally pause. Soundconversion using gTTS is very fast. gTTS is installed usingproper instruction from github link [12].E. Sound playingSound file found by gTTS is played using OMXplayer, anopen source based sound player. OMXPlayer is a commandlinebased sound player for the Raspberry Pi. It was developed as atested for the XBMC Raspberry PI implementation and is quitehandy to use standalone. OMXPlayer uses the OpenMAX(omx) hardware acceleration interface (API) which is theofficially supported media API on the Raspberry Pi [13].IV. IMPLEMENTATIONBuilding a OCR engine is a lengthy and time consum-ing process which needs large data training and applyingintelligent detection techniques. So using the most effectiveengine tesseract-ocr for character recognition and gTTS fortext to sound conversion is a better option. To implement thisdevice using a Raspberry Pi based compact design peripheraldevices, a camera is connected to pi’s usb, a speaker as asound output is connected to the sound port. A switch isconnected to start the process. As the operating system islinux based, it’s great advantage using comand line interface.The OCR engine and text to speech engine operation comandsare operated in a linux termianl. Text to speech engine is apython module which operate based on google text to speechAPI. So an uninteruptable internet connection is required. Thatis solved by the wifi connection ability of Raspberry Pi 3model B. Network configuration file of Raspbian is configuredfor a specific hotspot that is supplied by a Android phone.To automatecally start the whole process a startup script iswritten. After pi is powered and start switch is pressed onthe startup script executes automatically. Start script is linuxcomandline based comands, thats executes sequentially. Thecomands in start script do these operations step by step: checknetwork connectivity which is supplied by an android phone,capture image, do OCR, convert text file to sound file and playthe sound file. After reading one page of printed document thescript start the same process again. It continues in a loop aslong as the audience is willing to read the book. For stopreading simply pressing the switch the device will be off.Fig. 2: Device implemented using Raspberry PiV. RESULTResult is calculated on the performance of characterrecognition and text to sound conversion ability of the device.Device performance accuracy depends on the quality of imageand on the image resolution. That means higher accuracydepends on better quality of camer, ability of image capturingand image resolution. Device is tested on several banglaprinted paper script. The output of tesseract engine is a textfile. Accuracy of character recognition is calculated basedon the number of correct words found</s>
<s>from the output of that text file. Any mismatch with the original printed document is counted as a wrong word. To calculate accuracy, the device was tested on three printed Bangla scripts. Figs. 3-5 show the images of the three printed Bengali scripts and the corresponding Tesseract output text files.
Fig. 3: First image and Tesseract output
Fig. 4: Second image and Tesseract output
Fig. 5: Third image and Tesseract output
Correct words are counted from the text file manually. The word counts from the above three figures are shown in the table below.
TABLE I: Word count for each image
Page No | Total words | Correct words | Accuracy
01 | 38 | 37 | 97.3%
02 | 48 | 44 | 91.6%
03 | 27 | 22 | 81.4%
Accuracy is the ratio of the total number of correct words to the total number of words in the table:
Accuracy (%) = (correct words / total words) x 100 = (103 / 113) x 100 = 91.15%
Thus 91.15% accuracy is obtained when calculated over these three printed Bangla pages. Testing the device on several other scripts, the accuracy remains above 85%, and the text-to-sound conversion accuracy is above 90%. The comparison of right and wrong words can be easily understood from a bar plot; Fig. 6 shows this for the three printed pages.
Fig. 6: Graphical presentation of right and wrong words
VI. CONCLUSION
A reading device for Bengali-speaking visually handicapped people has been implemented successfully. The device is designed especially for visually handicapped people but can also be used for other purposes. It operates smoothly and its accuracy is excellent. The total execution time is somewhat long, but no more than two minutes, owing to the processing power of the Raspberry Pi; further development of the Raspberry Pi's processor can be expected to reduce the execution time in the near future. The text-to-speech engine is based on the Google text-to-speech API, so developing an offline text-to-speech engine would give this device better flexibility.
REFERENCES
[1] B. B. Chaudhuri and U. Pal, "A complete printed Bangla OCR system", Pattern Recognition, vol. 31, no. 5, pp. 531-549, 1998.
[2] Shamim Ahmed and Mohammod Abul Kashem, "Enhancing the Character Segmentation Accuracy of Bangla OCR using BPNN", International Journal of Science and Research (IJSR), ISSN (Online): 2319-7064.
[3] M. K. Shukla, T. Patnaik, S. Tiwari and S. K. Singh, "Script Segmentation of Printed Devnagari and Bengali Languages Document Images OCR".
[4] U. Pal and B. B. Chaudhuri, "Identification of Matra Region and Overlapping Characters for OCR of Printed Bengali Scripts", Intelligent Computing and Information Science, Communications in Computer and Information Science, vol. 135, pp. 606-612, 2011.
[5] Md. Abul Hasnat, Muttakinur Rahman Chowdhury and Mumit Khan, "An open source Tesseract based Optical Character Recognizer for Bangla script", 10th International Conference on Document Analysis and Recognition, pp. 671-675, 2009.
[6] M. Masud Rashid, Md. Akter Hussain and M. Shahidur Rahman, "Text Normalization and Diphone Preparation for Bangla Speech Synthesis", Journal of Multimedia, 5:6, 2010.
[7] S. Mukherjee and Shyamal Kumar Das Mandal, "A Bengali Speech Synthesizer on Android OS", Proceedings of the 1st Workshop on Speech and Multimodal Interaction in Assistive Environments, pp. 43-46, 2012.
[8] F. Alam, S. M. Murtoza Habib and Mumit Khan, "Bangla Text to Speech using Festival", Conference on Human Language Technology for Development, pp. 154-161, 2011.
[9] K. M. Azharul Hasan, Muhammad Hozaifa, Sanjoy Dutta and Rafsan Zani Rabbi, "A Framework for Bangla Text to Speech Synthesis", 16th International Conference on Computer and Information Technology (ICCIT), pp. 60-64, 2013.
[10] https://github.com/tesseract-ocr, Last accessed: August 14, 2017.
[11] https://tesseract-ocr.repairfaq.org/downloads/saltcymru document5.pdf, Last accessed: August 14, 2017.
[12] https://github.com/pndurette/gTTS, Last accessed: August 14, 2017.
[13] https://www.raspberrypi.org/documentation/raspbian/applications/omxplayer.md, Last accessed: August 14, 2017.</s>
<s>International Journal of Advanced Research in Computer Engineering & Technology (IJARCET) Volume 7, Issue 2, February 2018, ISSN: 2278 – 1323 All Rights Reserved © 2018 IJARCET 155 Abstract— Handwriting recognition has been one of the most interesting research areas for last few years. Many scholars have conducted several researches on offline handwriting recognition but online handwriting is still an open space for innovative research. Though it is true that various research works have been done for online handwritten character recognition of English as well as different Indian scripts, online word recognition of Indian script especially Bengali script / words is an on-going research area. Online handwritten word recognition system consists of many pre-processing steps, out of which skew detection and correction is the most important pre-processing step. Only few techniques have been implemented for skew detection and correction of handwritten words. Those already implemented techniques do not give perfect output and has very high time complexity. In this paper an endeavor is made to introduce an unprecedented and innovative approach of skew detection and correction for online Bengali handwritten words. Here the handwritten word is rotated from -45 degree to + 45 degree to calculate the height of the whole word for each degree of rotation and we consider that particular angle where the height of the word is minimum. If it is observed that the height is minimum and same or almost same (± 1) for more than one angle then the width of the word is calculated and the particular angle, where the width is maximum, will be considered. Then the repetition of the stated steps will be done in the busy zone so that the handwritten words can be skew corrected. 3364 Bengali handwritten words from people of different backgrounds have been tested and have got an outstanding result of 97.05% accuracy. Index Terms—Busy Zone, Height and Width, Online Handwriting, Pre-Processing Steps, Skew Detection I. INTRODUCTION At this age of digitization, every digital device uses touch sensitive surface to take input from user. To give any instruction to computer it is quite difficult to give input by using keyboard in digital device or PDA with small size of screen. So in this purpose if a computer system can detect the handwriting of people then it will be a great invention to give input instruction to any digital computerized device. To implement a proper handwriting recognition system we need to develop some pre-processing steps like noise detection and correction, smoothing, skew detection and correction, normalization, segmentation etc., out of them skew detection and correction is the most important one. If there is a skew or tilt in handwritten word then it is very difficult to perform next pre-processing steps and recognition will also be difficult. There are some existing works on skew detection and correction but they do not give perfect output and have very high time complexity. Here an attempt is made to explore the unprecedented and innovative approach of skew detection and correction for online</s>
<s>Bengali handwritten words. This algorithm can be applied for any Indian as well as foreign languages. Here several tests are done on the algorithm of Bengali handwritten words. The paper is organized as follows- Section II contains Bengali script and online data collection, Section III deals with the related work of Skew Detection and Correction process, Section IV explains the proposed methodology, Section V describes results and discussion, Section VI concludes the research work with future directions. II. BENGALI SCRIPT AND ONLINE DATA COLLECTION The Bengali alphabet or Bengali script is the writing system for the Bengali language and, together with the Assamese alphabet, is the fifth most widely used writing system in the world. The script is used for other languages like Meithei and Bishnupriya Manipuri, and is also used to write Sanskrit within Bengal. Besides, Bengali is the national language of Bangladesh. From a classificatory point of view, the Bengali script is an abugida, i.e. its vowel graphemes are mainly realized not as independent letters, but as diacritics attached to its consonant letters. It is written from left to right and lacks distinct letter cases. It is recognizable, as are other Brahmic scripts, by a distinctive horizontal line running along the tops of the letters that links them together which is known as matra. From a statistical analysis we notice that the probability that a Bengali word will have horizontal line is 0.994.The Bengali script is however less blocky and presents a more sinuous shape [1]. The alphabet of the modern Bengali script consists of 11 vowels and 40 consonants. These characters are called as basic characters. In Bengali script a vowel following a consonant takes a modified shape. Depending on the vowel, its modified shape is placed at the left, right, both left and right, or bottom of the consonant. These modified shapes are called modified characters. A consonant or a vowel following a consonant sometimes takes a compound orthographic shape, which is called as compound character. Compound characters can be combinations of two consonants as well as a consonant and a vowel. Compounding of three or four An Unprecedented Approach of Skew Detection and Correction for Online Bengali Handwritten Words Gouranga Mandal International Journal of Advanced Research in Computer Engineering & Technology (IJARCET) Volume 7, Issue 2, February 2018, ISSN: 2278 – 1323 www.ijarcet.org 156 characters also exists in Bengali. There are about 280 compound characters in Bengali. In this work the recognition of Bengali basic characters are considered. The online data collection involves the automatic conversion of text as it is written on a special digitizer or PDA, where a sensor picks up the pen-tip movements X (t), Y (t) as well as pen-up/pen-down switching. That kind of data is known as digital ink and can be regarded as a dynamic representation of handwriting as in Fig. 1. The ink signal is captured by either: A paper-based capture device a digital pen on patterned paper a pen-sensitive surface such as a touch screen the information</s>
<s>on strokes and trajectories are mathematically represented in an ink signal composed of a sequence of 2D points ordered by time. No matter what the handwriting surface may be, the digital ink is always plotted according to a matrix with x axis and y axis and a point of origin. Figure 1. Example of a online bengali handwritten word Online data acquisition captures just the information needed, which is trajectory and strokes, to obtain a clear signal. This effective information makes the data easier to process. III. RELATED WORK There are two types of handwritten document images are available. One is the offline handwriting another is online handwriting. Offline printed word recognition is comparatively easier than online handwriting recognition because it comes in printed form, so like handwritten word there are not so much variations there in writing styles. Printed word has not that much skew like online and offline handwriting. Before recognition of any handwritten word we need to do some sort of preprocessing. There are so many preprocessing steps like smoothing, dehooking, skew detection, skew correction etc. As stated earlier, several research works are available on skew detection and skew correction of handwritten document image, but most of those are applicable for offline handwritten document. In online handwritten document few works are there, especially in Bengali. Some of the works on offline handwriting are discussed as follows:– A. Baseline skew correction This approach works with the baseline of Bengali handwriting. First it detects the baseline of the word then calculates the angle of the baseline with horizontal line and then rotates all the pixels with that angle in opposite direction [2]. B. Convex Hull The main objective of employing the pseudo-convex hull is to decrease the use of empirical thresholds in developing this approach. This technique is being used in a way that reduces the minima in a word so that, when filtering undesirable minima, few empirical thresholds will have to be defined. This approach initially used for skew correction on offline data. In this approach the system was tested on 713 offline images of Brazilian bank checks and among those 70% was correctly processed [3]. C. Holistic Approach This approach works based on center of gravity of left part and right part of a handwritten word. After finding the center of gravity all the pixel moves to the particular angle to correct the skew. In this approach the system was tested on 8888 categories of 1,137,664 unconstrained online handwritten Chinese word samples. Experimental results for randomly rotated unconstrained cursive online handwritten Chinese word data demonstrated that the proposed method can achieve about 96.58% recognition accuracy [4]. D. Hough Transform Method Hough transform technique may be applied on the upper envelopes for skew estimation, but this is a slow process. Sometimes digitized image may be skewed and for this situation skew correction is necessary to make text lines horizontal. Skew correction can be achieved in two steps. First, estimate the skew angle θt and second, rotate the image</s>
<s>by θt, in the opposite direction and detect the skew angle is using Matra. This approach applied on offline data with an accuracy of 0.1 degrees. A page image is first divided into 20x30 rectangular blocks and the percentage of black pixels in each block is determined. If that percentage is between 5% and 25%, the block is considered to be non-noise and the Hough transform is calculated from it with a resolution of one degree over the range plus or minus five degrees. The angle with the maximum response in the transform space is used as the center for a further Hough transform analysis at plus or minus one degree with a resolution of 0.1 degrees. [5]. E. Morphological Approach The Mathematical Morphology consists in comparing an unknown picture X with a pattern B, perfectly defined in terms of shape, size and gray scale, named structuring element. In this approach the illustration was done with real examples of handwritten dates on bank checks. [6]. F. Projection Profile A straightforward solution to determining the skew angle of a document image uses a horizontal projection profile. This is a one-dimensional array with a number of locations equal to the number of rows in an image. Each location in the projection profile stores a count of the number of black pixels in the corresponding row of the image. This histogram has the maximum amplitude and frequency when the text in the image is skewed at zero degrees since the number of co-linear black pixels is maximized in this condition. In this approach the projection profiles are calculated at different angles directly from image data. Some methods using this approach calculate projection profiles from image features also. This approach also has been applied on algorithms that use the International Journal of Advanced Research in Computer Engineering & Technology (IJARCET) Volume 7, Issue 2, February 2018, ISSN: 2278 – 1323 All Rights Reserved © 2018 IJARCET 157 Hough transform for skew detection and correction. Another class of technique extracted features with local is directionally sensitive masks. The performance of most of the methods using projection profile approach reported in the literature range up to 0.1 degree accuracy. While it is arguable whether this fine resolution is needed in a digital copier application, at least a resolution of 0.2 to 0.3 degrees should be achieved [7]. G. Robust Solution The first step of this method is to divide images into NxN blocks, and then Otsu’s method is applied straightaway in each of the blocks. Each pixel is applied with a nonlinear quadratic filter to fine tune all the pixels according to the local information available. This technique is another skew correction technique for offline handwritten data. An accuracy of 97.7% has been obtained by this method after testing the system on 3045 text lines [8]. H. Skew Correction based on gravity center balancing This approach works based on center of gravity of left part and right part of a handwritten word. After finding the center</s>
<s>of gravity, the angle θ of the line, which connects the two gravity centers in relation to horizontal line, is calculated. Then the word is rotated clockwise by the angle θ if θ<90º, or anti-clockwise by the angle (180º-θ) if θ>90º. All the pixels move to the particular angle to correct the skew. 3000 Bengali words have been tested and obtained around 92.22% accuracy on word data from the proposed system [9]. I. Overlapping Regions In this approach skew detection and correction is done over the words that are separated from text lines. For skew detection, two equal overlapping portions including the whole word. The angle made by the line joining the two centers of mass with the horizontal is considered as the skew angle. The word is skew corrected on rotating pixels by the detected skew angle in the opposite direction [10]. J. Skew Between Two Successive Characters In order to correct the skew between two successive characters the angle between them is calculated first. This is obviously the angle between the two successive points i p and j p corresponding the two characters. Let this angle be denoted as θ, then we can find tanθ = l / b. The base b gives the horizontal distance and the perpendicular l gives the amount of skewness. Thus in the next step the image component between the two segmenting points is rotated upwards or downwards properly to make the skew angle zero. This process is continued for all points in the set Cp. Proper padding ensures that we don’t lose any vital object information during rotation the word in concerned [11]. IV. SKEW DETECTIONN AND CORRECTION The algorithm implemented here is very simple, innovative and its time complexity is very low and at the same it can provide outstanding result of accuracy. Maximum existing algorithm creates problem to the words which contains prolonged part in it. Because of few prolonged part, exact busy zone of a word cannot be determined and exact skew angle also cannot be determined as in Fig. 2. Figure 2. Wrong skew angle obtained due to prolonged part of word This algorithm is based on height and width of the whole handwritten word. As a general point of view height of any word should have minimum value and width of any word should have maximum value if there is no skew. After skew correction with approximate skew angle repetition of the same process, considering only busy zone is done to do the exact skew correction as in Fig. 3. The algorithm of Skew Detection and Correction is as follows- Algorithm: Skew Detection and Correction step 1: Consider co-ordinates (x,y) of all the pixels of handwritten word and find minimum value of y axis (min_y) and maximum value of y axis (max_y). step 2: Calculate the height of the word, i.e.- height = (max_y - min_y) step 3: Rotate all the pixels of the word in anticlockwise direction by 1 degree. i.e.- Rotation of any point (x,y)</s>
<s>by a certain angle θ with respect to the centroid of the word:
x' = xr + (x - xr)cosθ - (y - yr)sinθ and y' = yr + (x - xr)sinθ + (y - yr)cosθ
(where x' and y' are the newly generated coordinates, x and y are the old coordinates, and xr and yr are the coordinates of the centroid of the word).
step 4: Calculate the height of the word again for each 1 degree of rotation (up to 45 degrees anticlockwise and 45 degrees clockwise).
step 5: Find the particular angle where the height of the handwritten word is minimum and stop the rotation.
step 6: If the height is minimum for more than one angle, then consider the coordinates (x, y) of all the pixels of the handwritten word and find the minimum value of the x axis (min_x) and the maximum value of the x axis (max_x).
step 7: Calculate the width of the word, i.e. width = (max_x - min_x).
step 8: Check the width of the word for every angle where the same minimum height exists.
step 9: Choose the angle where the width is maximum for the minimum height and stop the rotation.
step 10: The skew correction is now almost, but not exactly, done; so calculate the busy zone of the word by checking the number of pixels available at each y coordinate, in order to remove the unwanted prolonged parts.
step 11: Consider only the busy zone of the handwritten word (ignore the pixels outside the busy zone) and repeat steps 1 to 9 once again.
step 12: The handwritten word is now skew corrected.
Figure 3. Process of Skew Detection and Correction
V. RESULT AND DISCUSSION
The experimental evaluation of the above algorithm is carried out using online Bengali handwritten words. The handwritten data were collected from people of different backgrounds. A total of 5,800 Bengali handwritten words were collected as samples for the experiment. Of them, 42% of the words were used for training the classifier for the present work and the rest were used for testing. 3,364 skewed Bengali words have been tested in our system and around 97.05% accuracy is obtained (see Table 1).
Table 1: Result of the Skew Detection and Correction Algorithm
Total words: 5800; Skewed words: 3364; Correctly corrected: 3265; Incorrectly corrected: 99; Skew correction accuracy: 97.05%
VI. CONCLUSION
This paper presents an innovative technique for skew detection and correction of online handwritten words based on the height and width of the word. With this algorithm, any online handwritten word can be made skew-free. The major drawback of earlier skew correction algorithms (namely, the proper angle not being determined because of prolonged parts) is solved here. Once a handwritten word is skew corrected properly, this algorithm can be applied to Bengali as well as other Indian scripts. We tested the proposed system on 5800 data samples, out of which 3364 are words with</s>
<s>skew and got the encouraging result. Not much work has been done towards the online recognition of Indian scripts in general and Bengali in particular. So this work will be helpful for the research towards online recognition of other Indian scripts as well as for Bengali in the level of word, text and so on. In fact the work for online recognition of Bengali handwritten word can be done smoothly by taking the help of the current proposed work VII. REFERENCES [1] Mazumdar, Bijaychandra,“The history of the Bengali language” (Repr. [d. Ausg.] Calcutta, 1920. ed.). New Delhi: Asian Educational Services. p. 57. ISBN 8120614526, 2000. [2] Bharath A. and Sriganesh Madhvanath, “online handwriting recognition for indic scripts”, hp laboratories, india, hpl-2008-45, may 5, 2008. [3] marisa e. morita, jacques facon, fl´avio bortolozzi, a. silvio j.a. garn´es, Robert sabourin, “mathematical morphology and weighted least squares to correct handwriting baseline skew”, document analysis and recognition, 1999. icdar '99. proceedings of the fifth int. conf., pp. 430 – 433, 1999. [4] kai ding, lianwen jin and xue gao, “a new method for rotation free online unconstrained handwritten chinese word recognition: a holistic approach”, college of electronic and information, south china university of technology, guangzhou, prc. document analysis and recognition, 2009. icdar '09. 10th int. conf., pp-1131 - 1135 , 2009. [5] farjana yeasmin omee, shiam shabbir himel and md. abu naser bikas, “a complete workflow for development of bangla ocr”, int. journal of computer applications by foundation of computer science, 21(9):1-6, may 2011. [6] marisa e. morita, fl avio bortolozzi, jacques facon, robert sabourin, “morphological approach of handwritten word skew correction”, anais do xi sibgrapi, outubro de 1998. [7] jonathan j. hull, “document image skew detection: survey and annotated bibliography”,document analysis systems ii ,pp 40-64, 1998. [8] u. pal, s. sinha and b. b. chaudhuri, "multi-oriented text lines detection and their skew estimation", proc. in indian conference on computer vision, graphics and image processing, pp. 270-275, 2002. [9] Rajib Ghosh, Gouranga Mandal, “A Novel Approach of Skew Correction for Online Handwritten Words”, International Journal of Computer Applications (0975 – 888), Volume 48– No.9, June 2012. [10] Shahnaz Abubakker, Bapputty Hajia, Ajay Jamesa, Dr.Saravanan Chandranb, "A Novel Segmentation and Skew Correction Approach for Handwritten Malayalam Documents", International Conference on Emerging Trends in Engineering, Science and Technology (ICETEST- 2015), Procedia Technology 24 ( 2016 ) 1341 – 1348. [11] A.Roy, T.K.Bhowmik, S.K.Parui, U.Roy "A Novel Approach to Skew Detection and Character Segmentation for Handwritten Bangla Words", Proceedings of the Digital Imaging Computing: Techniques and Applications (DICTA 2005) 0-7695-2467-2/05 2005 IEEE International Journal of Advanced Research in Computer Engineering & Technology (IJARCET) Volume 7, Issue 2, February 2018, ISSN: 2278 – 1323 All Rights Reserved © 2018 IJARCET 159 Authors Profile Mr. Gouranga Mandal pursed Bachelor of Technology in Information Technology from West Bengal university of Technology, Kolkata in 2009 and Master of Technology in Computer Science & Engineering from West Bengal university of Technology, Kolkata in year 2012. He is currently working as Assistant Professor</s>
<s>in Computer Science & Engineering Department, Faculty of Science & Technology, The ICFAI University Tripura since 2017. He has published many research papers in reputed international journals. His main research work focuses on Online Document Image Processing, Optical Character Recognition, Natural Language Processing and analysis in Bengali and other Indian Script and Online Handwritten Document Recognition. He has 6 years of teaching experience.</s>
<s>Conference paper: 2019 International Conference on Sustainable Technologies for Industry 4.0 (STI), 24-25 December 2019, Dhaka, Bangladesh. DOI: 10.1109/STI47673.2019.9068005.
Developing the Bangladeshi National Corpus- a Balanced and Representative Bangla Corpus Khan Md Anwarus Salam*, Mahfujur Rahman°, Md Mahfuzus Salam Khanǂ Chief Technology Officer*, Research Coordinator°, Chief Executive Officerǂ Dream Door Soft Ltd. Dhaka, Bangladesh {anwar*, risad°, mahfuzǂ}@dreamdoorsoft.comAbstract— The need for a balanced, representative national scale corpus has been skyrocketing for the already ‘low resource’ tagged language-Bangla. Many sporadic empirical works have been done so far in the field of NLP and Computational Linguistics yet, and these are never enough. Moreover, none of these works can bear the best fruit without the help of a standard corpus. To address these issues, the goal of this research work was set to compile the Bangladeshi National Corpus (BDNC). This paper proposes the development process of the BDNC (first phase- Bangla monolingual corpus). In this work, the whole task was divided into three major phases, where the goal of the first phase is to build a representative monolingual corpus that will include at least 100 million Bangla words. Whereas, in the second phase, there will be a sub-corpora that will consist of a parallel corpus having 1 million words in Bangla and English. However, at the third and final phase, the parallel corpus will incorporate 15 foreign languages (including English) comprising a weighted corpus size of at least 15 million words. Keywords— Bangla, Corpus, balanced, representative, monolingual corpus, multi-lingual corpus, translation corpus, parallel corpus. I. INTRODUCTION Bangla, also known as Bengali, is the national language in Bangladesh and even a mother tongue in the Indian state of West Bengal. Bangla has more than 260 million speakers worldwide, and it is the sixth most spoken language in the world [22]. However, Bangla is still considered a low-resource language because of the unavailability of a balanced corpus with digitally accessible resources. Corpus is a much needed structured data set of language instances that work as a heart for many tools of Natural Language Processing (NLP). Researchers of different scientific domains also find it as a useful tool. However, for many practical reasons, despite having such advantages, there are not many instances of the balanced, representative corpus being developed for Bangla. Bangla is already considered a low resource language as far as language technology concerns and the lack of having a standardized corpus is also a reason behind it. Needless to say, the relation between these two problems can be labelled as an example of bidirectional causation. Throughout the</s>
<s>document, we’ll be discussing our approaches to build the corpus and methods that we’ll be using in the course of corpus creation. Bangladesh is often considered as one of the fastest emerging nations in the world in terms of economic growth. Besides, the country has a reputation in utilizing IT in the most creative and effective ways to solve many of its problems. Yet, the challenges of the 4th Industrial revolution are enormous to countries like Bangladesh. For Bangladesh, the readiness for Industry 4.0 means, being equipped with some sets of prerequisites that include- Bangla Language Processing (NLP) techniques, tools, and various AI solutions (Bangla enabled) among other major phenomena. A well-made, maintained balanced and representative corpus gives a solid ground for Bangla NLP researches and other related fields, thus fostering the backbone development required to face the challenges of Industry 4.0. There are already some notable works being done in the field of corpus creation. Salam, Yamada and Nishino [2] proposed a balanced corpus for Bangla language for the first time. Sarkar, Pavel and Khan [17] attempted an automatic corpus creation process where they collected all the already available texts from the web and other offline resources as the text source of the corpus. Another attempt was the creation of CIIL corpus Dash and Chaudhuri [16], which was actually a collection of corpus or corpora of nine Indian languages including Bangla. The corpus has a size of 3 million words. Mumin, Shoeb, Selim and Iqbal have built a corpus titled SUPara [18], which was an English-Bangla parallel corpus in 2011. The corpus has more than 200000 words in either language. The same authors created another corpus named SUMono [14] in 2013 which was actually a monolingual corpus consisting of a word size of more than 27 million. This corpus was created following the framework of the American National Corpus. Another such parallel corpus creation attempt was carried out recently though the data collection method was crowd-sourcing [22]. This Bangla-English corpus has a total of 517 Bangla sentences and 2143 corresponding English translations while every Bangla sentence was translated by an average of 4 times via crowd-sourcing. Shamshed and Karim [20] proposed a corpus intended for an efficient way of information retrieval. A newspaper specific corpus was created by Majumder and Arafat [19] where the authors used texts from a Bangla daily newspaper for a particular year. Khan, Ferdousi and Sobhan [15] created another Bangla corpus titled “BDNC01”. The size of the corpus was 12 million words and the texts were collected from some of the Bangla daily newspapers and some Bangla literature. II. DEVELOPMENT PHASES OF BDNC A. First Phase (Bangla Monolingual Corpus) A monolingual corpus can be either general or special. In our scope, we are up to build the monolingual corpus as a general one so that it can eventually represent the national variety of colloquial Bangla language [13]. However, the corpus, in the long run, will also reflect the diachronic features of Bangla language. Below is</s>
<s>the flow-chart showing the principal steps that we have considered while developing our corpus (mono-lingual) in the first phase. Fig. 1. The development process of Bangla corpus B. Second & Third Phase (Multilingual Parallel Corpora) In the second and third phase of the Corpus development task, we will be using the following flowchart as the goal is to develop multilingual parallel corpora. Fig. 2. The development process of parallel corpus In principle, the goal of the second phase is to translate (human aided) or gather already translated texts (in both Bangla to English and English to Bangla) in order to build a parallel corpus (translation corpus) in Bangla-English. And the aim of the third phase of the project is to translate manually (human aided) popular website contents and other available resources written in Bangla or English to a number of foreign languages including- Arabic, Bangla, Spanish, French, Mandarin, Japanese, Korean, Hindi, Persian, Burmese, Bhutanese, Urdu, Russian, German, Portuguese and English. III. CORPUS DESIGN Before starting with building the actual corpus it is mandatory to design the corpus with proper alignment with the goal and purpose of the corpus itself. We considered two major criteria to design a corpus- one is the Purpose design and another one being Model design. The following table states the different minor notions that we have also considered while designing the criteria above. TABLE I. CORPUS DESIGN CRITERIA Purpose Design Model Design Scope of usage defining Corpus Typology design User defining Tagset design Service and QoS design Storage and Database design To make a balanced and representative corpus, we are following three independent selection criteria: domain, time and medium [2]. We followed the Chinese SINICA corpus design methodology and added three more attributes, author, writing level and target audience. Table II shows the proposed domain balance percentage. TABLE II. DOMAIN BALANCE PERCENTAGE Domain / Source Percentage Text Books 20% Mass Media 20% Literature 15% Spoken corpus 10% Translations 5% IV. THE DEVELOPMENT PROCESS OF THE BANGLA MONOLINGUAL CORPUS After designing the purpose and model of the corpus, one can start building the corpus. Following are the steps that we have followed to build the monolingual corpus. A. Collecting Raw Data In order to maintain representativeness and to build a balanced corpus, texts are to be collected from various sources that will ensure all the features (both spoken and written forms of colloquial Bangla language used in various domains) and objectives (balanced, having representativeness) that the corpus should hold. The text can be collected in many ways including the followings- using OCR, web-crawling, typewriting, existing electronic text, using STT etc. • Using OCR: Optical Character Recognition (OCR) is considered a way of obtaining electronic texts from books. In this case, human aided proofreading or editing is needed, to correct scanning errors and other technical errors. • Typing: Right now scanner machines and computer programs are not efficient enough at recognizing Bengali texts of different typefaces, lower-quality typography, or handwriting. Therefore, typing can be considered as a solution,</s>
<s>though it is a labour-intensive and resource-hungry option. Still, this method is better for leaflets, hand-written items, and recorded speech. • Existing electronic texts: There are many texts already exist in electronic form in Bengali which is a great source of text- such as Wikipedia, Baglapedia, Newspapers, Magazines and etc. • STT: Recorded speech can also be transformed into electronic texts using speech to text tools. This kind of component will help much in building a collection of texts of oral form. In our work, primarily we have collected the data from different web-domains (online newspaper, Wikipedia, Banglapedia etc.) using self-made web-crawler tools. The collected data mainly represent the written aspects of the Bangla language. However, according to the original plan, we’re about to include spoken corpus and scale up the current corpus to the targeted size. For collecting data from different websites we developed and used a web-crawler that can detect the targeted content and fetch it. B. Encoding Adjustment It’s needed to be assured that, all the collected texts are in UTF-8 (Unicode) format prior to proceeding further in building this corpus. If any of the text segments is found written in non UTF-8 then, these must be converted back into Unicode. During the text collection phase, we have found that not all the Bangla text data available online are in UTF-8 format. There are still some ANSI encoded Bangla texts available on the web for legacy reasons. To solve this problem, we have developed an encode-adjusting tool that looks for encoding issues across the collected texts and adjusts and convert encodings while required. C. Filtering The collected text must be filtered for any unwanted, unrecognized, foreign language, misspelt words and garbage characters. Filtering can be done automatically by developing tools specifically designed for Bangla language. Primarily we have taken care of the unwanted characters, symbols and spacing issues persisting in the electronic texts using a home-developed tool. However, due to lack of an advanced spell checker, we couldn’t check the spellings of the texts. In fact, in our current scope, we do not intend to check spellings as it is just a written corpus for now. D. Word Segmentation & Tokenizing The next big step after filtering is segmentation/tokenizing. The process of segmenting running text into words and sentences is called tokenizing. For languages like Bangla where word segmentation can be performed by a simple script given white-space and punctuation, but still, it doesn’t guarantee a 100 percent success. A tokenizer capable of handling as many as linguistically ambiguous features can only be accepted here. A token has to be linguistically significant and Methodologically useful. In our work, we have developed a beginner level tokenizer that can break a running sentence into word forms which were later labelled by the annotator. E. Annotation (Tagging) We’ll be using the universal format of CoNLL-U for annotation purpose. In CoNLL-U format, annotations are encoded in plain text files (UTF-8, using only the LF character as line break) with three types</s>
<s>of lines: 1. Word lines containing the annotation of a word/token in 10 fields separated by single tab characters. The fields are namely- ID, FORM, LEMMA, UPOSTAG, XPOSTAG, FEATS, HEAD, DEPREL, DEPS, MISC 2. Blank lines marking sentence boundaries. 3. Comment lines starting with a hash (#). Example of annotating a Bangla sentence using CoNLL-U format: # newdoc id = Rabindra_cd_20170926063000_BN # sent_id = Rabindra_cd_20170926063000_BN-0001 # text = রােজশ ু েল যায়। 1 রােজশ রােজশ PROPN NNP Number=Sing 0 root _ _ 2 ু েল ু ল NOUN NN Number=Sing 1 obl _ _ 3 যায় যায় VERB VBZ Mood=Ind|Tense=Present 1 _ _ 4 । । PUNCT । _ 1 punct _ _ V. TOOLS TO UTILIZE THIS CORPUS We have developed some corpus analyzer tools of our own as there are very few resources available in this segment. Very few of the tools available nowadays support Bangla language. We have developed a frequency analyzer, N-grams (lexical bundles), concordance (node, KWIC, sorting, expanded context). VI. RESULT AND ANALYSIS Following are some of the results that were analyzed by the tools that we have developed. We have separated our corpus in 4 different plain text files of different sizes without compromising any of the qualitative features of the corpus like text-domain and other text qualities. Four parts of the plain text containing files were created in this separation process namely- mini, kilo, mega, giga. The reason behind such segmentation of the corpus file was that we wanted to make sure the corpus is easily manageable and scalable. A. Data structure Our primary analysis suggests that the 4 documents contain a number of 7,678,597 total words (tokens) while all the documents combined hold a total of and 285,496 unique word forms (types). The weighted average of Type-Token Ratio all the corpus is 0.0372 TABLE III. WORD TYPES AND DISTRIBUTIONS IN THE CORPUS (4 FILES) File Words Types Ratio Word/sentence Mini 445868 45254 0.10149 14.269145838 Kilo 756241 62095 0.08211 14.262239740 Meg2328455 137241 0.05894 13.801686938 Giga 4148033 180135 0.04342 14.211335402 Document Length: Longest: giga (4148033 words); mega (2328455 words) Shortest: mini (445868 words); kilo (756241 words) B. Word frequency It’s known that, the most frequent words in a written corpus are usually the stop words. Stop words are generally filtered out in many applications of NLP and other studies. However, here we have considered all the varieties of lexical items while preparing the word frequency list. The following table shows a frequency analysis of the lexical items that persist in the corpus. TABLE IV. MOST FREQUENT WORDS IN THE CORPUS Word frequen % word frequency ও 74865 0.97498280 এই 27406 0.35691416 এ 51757 0.67404241 বেলন 23384 0.30453480 না 51418 0.66962754 িতিন 22981 0.29928645 কের 50955 0.66359779 এবং 22596 0.29427251 থেক 39744 0.51759456 িনেয় 22416 0.29192833 হয় 34932 0.45492686 এর 21860 0.28468742 করা 34615 0.45079850 হে 21447 0.27930884 হেব 29297 0.38154105 এক 21220 0.27635257 হেয়28015 0.36484530 কর21186 0.27590978 জন 27663 0.36026113 ম20931 0.27258886 C. Type-Token Ratio (TTR) The ratio of the total number of words</s>
<s>(token) in a document to the number of unique words (types) in the document is called Type-Token Ratio. Highest: mini (0.101) kilo (0.082) Lowest: giga (0.043) mega (0.059) A lower vocabulary usually density indicates complex text with a pool of unique words, and a higher ratio indicates simpler text with words reused. The data indicates that, the file mini and kilo contain more ‘function words’ in regard to unique or content words than their siblings’-giga and mega. Average Words per Sentence: In our corpus, we have found that the weighted average of words per sentence in our corpus is: 14.1. Below is the file specific average word per sentence rate Highest: mini (14.3) kilo (14.3) Lowest: giga (14.2) mega (13.8) D. Collocation (N-gram analysis) We have analyzed most co-occurring words or words cluster known as collocation using N-gram architecture. Below are some of the discovered collocation data of the corpus which was measured using different N-gram techniques (uni-gram and trigram). TABLE V. THE COLLOCATION OF THE WORDS IN THE CORPUS (UNI-GRAM) Worcount collocatcounword count collocacounকরা 34615 হয় 7469 করা 34615 হে 1808 করা 34615 হেয়েছ 7204 হয় 34932 না 1670 এ 51757 ছাড়া 3457 এ 51757 ধরেনর 1649 এ 51757 সময় 3232 না 51418 থাকেল 1541 করা 34615 হেব 3034 হয় 34932 এ 1526 হেব 29297 না 2499 এ 51757 িবষেয় 1500 এ 51757 ব াপাের 2197 এ 51757 জন 1354 করা 34615 হে 1808 এ 51757 কথা 1284 হয় 34932 না 1670 হেব 29297 এর 1249 এ 51757 ধরেনর 1649 হেয়েছ 28015 এ 1239 TABLE VI. COLLOCATION OF THE WORDS IN THE CORPUS (TRI-GRAM) Worcount collo-cate count word count collo-cate count করা 34615 হয় 7736 এ 51757 করা 2493 করা 34615 হেয়েছ 7431 এ 51757 ব াপাের 2297 এ 51757 ছাড়া 3517 হয় 34932 না 2210 করা 34615 হেব 3505 হেয়েছ 28015 এ 2201 এ 51757 সময় 3496 এ 51757 জন 2095 হেব 29297 না 3103 না 51418 করেত 2088 হয় 34932 এ 2955 না 51418 করা 2073 না 51418 কােনা 2737 করা 34615 না 2030 কের 50955 এ 2670 না 51418 না 1996 করা 34615 এ 2581 কের 50955 থেক 1977 E. Data visualization We have analyzed the data using many other techniques and tools available and developed by us and now we are to visualize some of the aspects of the corpus. Here are some examples of comparative corpus data visualization across multiple corpus data files. Relative frequency: To find the relative frequency of any lexical item in our corpus, we need to divide the frequency of the lexical item by the total number of lexical items in the sample. In our case, the samples are the 4 separated data files of the corpus. The following chart shows the relative frequencies of the most frequent words across 4 different corpus data files. Fig. 3. Relative frequency of the top 4 (most frequent) words F. Grammatical analysis: We wanted to use the corpus for some more linguistic (traditional grammatical) researches as shown in Fig. 3. Therefore, we observed</s>
<s>the comparative frequency of some of ‘অব য়’ (which is a part of speech or grammatical category name in Bangla grammar). In comparison to English grammar, ‘অব য়’ can be used as both prepositions, conjunction and interjection in a sentence of Bangla language. ‘ও’ and ‘এবং’ are a somewhat similar type of POS in Bangla language considering their semantic boundary and are used as a conjunction. We wanted to see how frequent are these two words and which one is more frequent than the other in Bangla language (in the context of our corpus). Below is the graph showing the result in Fig 4. Fig. 4. relative frequency of Bangla ‘অব য়’- ( ‘ও’, ‘এবং’) CONCLUSION The development of a corpus in our targeted scale is not only a huge task but also a tiring and a resource-hungry job. However, still, we have compiled a corpus having a size of over 7.6 million in size. Due to limitation of time and resources, we could not annotate the entire corpus with the full features that we have primarily expected. In the future, we are going to annotate the entire corpus and scale up the size of the existing corpus. Therefore we will start developing the parallel corpus shortly. REFERENCES [1] Gerrit Botha and Etienne Barnard, 2005. Two approaches to gathering text corpora from the World Wide Web, Proceedings of the 16th Annual Symposium of the Pattern Recognition Association of South Africa. [2] Salam, K. M. A., Yamada, S., & Nishino, T. (2012, May). Developing the first balanced corpus for Bangla language. In Informatics, Electronics & Vision (ICIEV), 2012 International Conference on (pp. 1081). IEEE. [3] Salam, K. M. A., Yamada, S. and Nishino, T. 2010. "English-Bengali Parallel Corpus: A Proposal", Tokyo, TriSAI – 2010 [4] Salam, K. M. A., Yamada, S., Nishino, T. Mumit Khan, 2009 "Example-Based English-Bengali Machine Translation Using WordNet", Tokyo, TriSAI – 2009 [5] Tony McEnery and Andrew Wilson, 1996. Corpus Linguistics, Edinburgh University Press. [6] Yeasir Arafat, Md. Zahurul Islam and Mumit Khan, 2006. Analysis and Observations From a Bangla news corpus, Proc. of 9th International Conference on Computer and Information Technology, Dhaka, Bangladesh. [7] Baker, Mona (1995) “Corpora in translation studies: an overview and some suggestions for future research” Target 7, 2, pp 223-243. [8] Biber, Douglas (1993) “Representativeness in corpus design”, in Literary and Linguistic Computing, 8, pp 243-257. [9] Chen, Kehjiann, Chu-ren Huang, Li-ping Chang and Hui-li Hsu. 1996. SINICA CORPUS: Design methodology for balanced corpora. Language, Information and Computation 11:167-176. [10] Dash, Niladri Sekhar and Chaudhuri, B.B. 2001. A corpus-based study of the Bengali language. Indian Journal of Linguistics. Vol.20. No.1. Pp. 19-40. [11] Dewan Shahriar Hossain Pavel, Asif Iqbal Sarkar and Mumit Khan, 2006. A Proposed Automated Extraction Procedure of Bangla Text for Corpus Creation in Unicode, Proc. International Conference on Computer Processing of Bengali. [12] Frankenberg-Garcia, A. and Santos, D. (2003) “Introducing COMPARA: the Portuguese-English Parallel Corpus”, Corpora in translator education, Citeseer pp 71—87. [13] Zanettin, F. (2011). Translation and corpus</s>