A Crowd-Source Based Corpus on Bangla to English Translation
21st International Conference of Computer and Information Technology (ICCIT), 21-23 December, 2018

Nafisa Nowshin, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh (nafisanowshin107@gmail.com)
Zakia Sultana Ritu, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh (zakiaritu.cse@gmail.com)
Sabir Ismail, Computer Science and Engineering, Stony Brook University, New York, United States (sabir.ismail@stonybrook.edu)

Abstract—In this paper, we present a crowd-sourced Bangla to English parallel corpus and evaluate its accuracy. A complete and informative corpus is necessary for developing automated language processing for any language. A Bangla to English parallel corpus is important for various multilingual applications and NLP research, but a complete one is still scarce. In this paper we propose a large-scale method of constructing a Bangla to English parallel corpus through crowd-sourcing. We chose crowd-sourcing to venture a new approach to corpus construction and to evaluate the human behavior patterns involved. The translations were collected from undergraduate university students to ensure strong language knowledge. A Bangla to English parallel corpus will also help in comparing the linguistic features of the two languages. We present an initial dataset prepared via crowd-sourcing, which will serve as a baseline for further analysis of crowd-sourced corpora. Our primary dataset consists of 517 Bangla sentences; for every Bangla sentence we collected 4 English sentences on average, 2143 English sentences in total. The data was collected from 62 users over a period of 2 months.
Finally, we analyze the dataset and outline directions for further research.

Keywords—Natural Language Processing (NLP), machine learning, corpus, crowd-sourced data, Bangla to English translation.

I. INTRODUCTION

A corpus is a collection of written texts or spoken material of a language that is processed to learn about that language's behavior. Corpus construction is fundamental to most language research, and in this respect Bangla still lags behind other languages. A number of Bangla corpora are available, such as SUMono [1] and BdNC01 [2], but none of them are constructed from crowd-sourced data. We therefore wanted to apply the crowd-source method to constructing a Bangla to English parallel corpus. Crowd-sourced data means data or information obtained for a particular task or project by enlisting the services of a large number of people, typically via the Internet. What we propose is to collect Bangla to English translation data by means of crowd-sourcing and to evaluate the collected data to understand its variations and structure.

A parallel corpus contains a collection of original texts in one language and their translations into a set of other languages. In our case, the Bangla to English parallel corpus contains Bangla text data and its English translations. Parallel corpora have many uses in various fields: comparing the linguistic features of two languages, investigating similarities and differences between the source and target languages, and supporting translation studies and machine translation research. A Bangla to English parallel corpus will help in all of these areas, and because it is prepared via crowd-sourcing, we also obtain information about users' behavioral patterns while translating the sentences. This can further help machine
translation research on the Bangla language and help determine behavioral patterns in that setting.

Although crowd-sourced data is relatively new in NLP research, it is fast gaining popularity for training machine learning models. The process yields new insights and helps incorporate an understanding of human behavior into machine learning. For all these reasons the importance of crowd-sourced data is increasing rapidly, and in this paper we propose a new method of parallel corpus construction applying it.

This paper is arranged as follows. Section 2 reviews previous work on corpus construction and gives an overview of the present state of this area. Section 3 discusses our reasons for choosing this method. Section 4 presents the full methodology of our work, with examples of our data. Section 5 analyzes the collected data, and Section 6 concludes.

II. BACKGROUND STUDY

Corpus construction is one of the most important parts of any language research. The strength of a language's digital presence depends on the availability of a proper and complete corpus for that language, so much attention has been given to corpus construction in NLP. Bangla is no different: much work has been done, and many processes have been evaluated, toward constructing a complete Bangla corpus. We discuss some of these works below.

The process of constructing a Bangla corpus started long ago. Dash and Chaudhuri [3] constructed a small-scale Bangla corpus, along with corpora for 9 other Indian languages, called the CIIL corpus. It consists of only 3 million words; because of this small size, it fails to be representative of the Bangla language.

Automatic Bangla corpus creation was attempted by Sarkar, Pavel and Khan [4].
They collected all freely available Bangla documents from the web with a web crawler, along with available offline Bangla text documents, then extracted all the words in these documents into a large text repository and converted them to Unicode.

Salam, Yamada and Nishino [5] proposed the first balanced corpus for the Bangla language, built around three independent criteria: time, domain and medium. As their goal was a balanced corpus, they also attached necessary metadata to the collected text, such as sample size, author details and topic. Their sources were literary text, Bangla academic papers, Bangla textbooks, newspaper articles, TV and radio news scripts, Bangla technical manuals, and legal documents written in Bangla; to make the corpus representative and balanced, they covered a wide range of text sources.

Mumin, Shoeb, Selim and Iqbal [1] constructed a new Bangla corpus named SUMono, consisting of more than 27 million words. SUMono was built from available online and offline Bangla text covering articles on six types of topics, following the framework of the American National Corpus (ANC). It includes written texts from writers of various backgrounds, Bangla newspaper articles available online, Bangla text from various websites, and so on. Because of the variety of the types of data
available in this corpus, its representativeness of the Bangla language has been ensured.

The same authors also built an English-Bengali parallel corpus known as SUPara [6]. In building it their main focus was balance; it contains a variety of texts from different domains. They first converted the plain texts to Unicode and then marked them up according to the Corpus Encoding Standard. This corpus is open for educational and research purposes.

There are also specialized Bangla corpora. A good example is the "Prothom-Alo" corpus [7], built solely from news articles published during 2005 in the popular Bangla newspaper "Prothom-Alo". The texts were collected from the newspaper's website, then extracted, categorized and converted to Unicode. Because this corpus consists of such specialized data, it cannot be used in many NLP research tasks.

Khan, Ferdousi and Sobhan [2] created a new Bangla text corpus named "BdNC01". Its text sources are articles collected from the web editions of several influential daily newspapers and literary works of old and modern writers, totaling nearly 12 million words. The text was collected over a period of 6 years to avoid time dependencies. After collection and processing, the text was added to the repository and statistical computations were run on it for a better understanding of Bangla linguistic behavior.

Shamshed and Karim [8] also proposed a method for Bangla text corpus creation, intended for an efficient information retrieval system. Since the corpus targets information retrieval, all its texts are document-specific. Their sources were Bangla books and Bangla web data. After collecting and formatting the text, they calculated term frequencies and then applied a random walk algorithm to the data.
Then they assembled the metadata.

In summary, there is a rich and growing literature on corpus construction techniques, with much scope for improvement. Most of the works discussed in this section share more or less the same development process; they vary in their text sources, their size, and the range of topics they cover to represent the Bangla language. The most important point to note, however, is that none of these works involve crowd-sourced data. To our knowledge, constructing a text corpus from crowd-sourced data has not been attempted before, so we propose a new corpus construction process.

III. WHY A CROWD-SOURCED CORPUS

There have been various approaches to parallel corpus construction. They mostly focus on collecting text documents in one language from web pages or written text files, converting them to Unicode, marking them up according to a corpus encoding standard for XML, and then aligning them. We tried a new approach: first we constructed a Bangla corpus of simple, short Bangla sentences, and then collected crowd-sourced English translations of those sentences. This way we obtained more than one translated sentence for each Bangla sentence and could compare the outputs. The process also gives us insight into human behavior when translating from one language to another. It is discussed in detail in the next
section.

IV. METHODOLOGY

A. Data Preparation

In the first step we focused on preparing the Bangla text data. The Bangla text in our corpus consists of simple, short sentences, mostly with only one verb. We worked with almost the same sentence pattern, varying the sentence by the tense and person of the verb; this gave us many variations of one sentence. The goal was to compare the crowd-sourced results and observe users' behavioral patterns under small changes in the sentences. Some examples of the sentences present in our corpus:

• আিম ভাত খাই।
• আিম ভাত খািচ্ছ।
• আিম ভাত খািচ্ছলাম।
• আিম ভাত খােবা।
• বাবা বাজাের েগেছন।
• বাবা বাজাের যােবন।
• বাবা বাজাের যােচ্ছন।
• বাবা িক বাজাের েগেছন?
• কৃষক েক্ষেত কাজ করেত যােচ্ছ।
• েস িক ঢাকা শহের বাস কের?
• বৃিষ্ট না হেল আমরা বাইের যাব।
• েখলাধুলা সব্ােস্থর জেন উপকারী।

For preparing this corpus we went through some Bangla to English translation books, taking help from school-level English grammar books [9] that cover Bangla to English translation and grammar structures. As the examples above show, we tried to cover assertive, interrogative, negative, conditional and imperative sentences, and to vary the gender, tense and person of the same sentence. Details of the Bangla part of the corpus are given in Table I.

Table I: DETAILS OF BANGLA PART OF THE CORPUS
Total sentences: 517
Total words: 2352
Average sentence length: 5 words

Fig. 1. Statistics of the Bangla Sentences

B. Data Collection

The English translations of our Bangla sentences were collected through crowd-sourcing. For this purpose, we developed a web interface for collecting translations from people. Figure 2 and Figure 3 show screenshots of the interface. Using this website we collected data from people.
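The figures reported in Table I (and later in Table II) are straightforward to recompute. Below is a minimal sketch, assuming the corpus sentences are available as a list of strings; the paper does not describe its actual storage format, and the function name is ours.

```python
# Recompute the corpus statistics reported in Table I / Table II from a
# list of sentences. The demo list is a tiny placeholder, not the real corpus.

def corpus_stats(sentences):
    """Return (total sentences, total words, average words per sentence)."""
    if not sentences:
        return 0, 0, 0
    word_counts = [len(s.split()) for s in sentences]
    total_words = sum(word_counts)
    return len(sentences), total_words, round(total_words / len(sentences))

# Tiny demo with sentences of the kind listed in Section IV:
demo = ["আমি ভাত খাই।", "বাবা বাজারে যাবেন।", "কৃষক ক্ষেতে কাজ করতে যাচ্ছে।"]
print(corpus_stats(demo))
```

Run over the full 517-sentence corpus, this should reproduce the Table I totals.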
As seen from the screenshots of the interface, we gave each user a random Bangla sentence from the corpus and they added its English translation.

Fig. 2. The Sentence List
Fig. 3. Adding English translation of a sentence

We collected 3 to 5 English translations for each Bangla sentence, which results in 4 translations per Bangla sentence on average. For the 517 Bangla sentences in our corpus we got a total of 2143 English translated sentences. Details of the English part of the corpus are given in Table II.

Table II: DETAILS OF ENGLISH PART OF THE CORPUS
Total sentences: 2143
Total words: 13062
Average sentence length: 6 words

Table III shows some of the translated sentences collected through crowd-sourcing.

The data was collected from a group of university students whose medium of study is English and whose first language is Bangla. Our dataset consists mostly of simple sentences, and the chosen user group is well adapted to and capable of translating them. There were 62 contributors in total, and the data was collected over a period of 2 months. The source code of the website used for collecting translations is available on GitHub [10], along with the collected translations.

V. RESULT ANALYSIS

As
stated earlier, for each Bangla sentence we got 4 English translations on average, but the number of translations received for a sentence varied with sentence length.

Table III: COLLECTED DATA

বাবা বাজাের যােবন।
• Dad will go to bazar.
• Father will go to the market.
• father will go to the market
• Dad will go to market.
• Father will go to office.

বাবা িক আজ বাজাের যােবন?
• Will father go to market today?
• Will father go to bazar today?
• Will father go to bazar today?

আিম এখন ভাত খােবা না।
• I won’t eat rice now
• I will not eat rice now.
• I won’t eat rice now.
• i wont eat rice now

আিম গতকাল ব স্ত িছলাম।
• I was busy yesterday.
• I was busy yesterday.
• I was busy yesterday.
• I was busy yesterday.
• I was busy yesterday.

বাচ্চারা মােঠ েখলেছ।
• children are playing in the field.
• Kids are playing in the field.

রিহম েবড়ােত যােচ্ছ।
• Rahim is going to visit.
• Rahim is going outside.
• Rahim is going to a tour.

তুিম িক কাজিট েশষ কেরছ?
• Are you finished the job ?
• Have you done the work?
• did you finish the work?

কৃষক িক েক্ষেত কাজ করেছ?
• is farmer working on his farm.
• Farmer is working in the field?
• Is Farmer working in the field?
• Is farmer working in the field.

েখলাধুলা সব্ােস্থর জেন উপকারী।
• sport is beneficial for health.
• The sport is beneficial for health.
• Sports are beneficial for health.
• Sports is better for health.

Table IV: TIME AND CONTRIBUTOR
Total Contributors: 62
Time Required: 2 months

Sentences of length 3, 4 and 5 words were translated most often. The average number of translations received per sentence length is shown in the graph in Figure 5.

As seen in the previous section, we got a number of translations for each Bangla sentence, with some variation from user to user. We discuss these variations and the reasons behind them in this section. For very simple and short sentences, all the translations we got are almost the same and correct.
for example-1) আিম ভাত খাই না।• I don’t eat rice.• I don’t eat rice.• I do not eat rice• I don’t eat rice.As seen in the above example, the sentence is very smalland simple and there is not much variation in the wayFig. 4. An overview of user contributionFig. 5. Statistics of Average translations of Bangla sentenceaccording to length (in words)different people translated it. But when the sentence hasnouns and pronouns the translation gets more varied. forexample-2) বাবা বাজাের েগেছন।• Dad went to the market.• Father went to bazar.• Father has gone to the market.• Father has gone to the market.• Father has gone to Market.Here for the noun word 'বাবা' there can be two Englishwords, ’Father’ and ’Dad’ which can be used alterna-tively and both are correct. Same can be said for theword 'বাজাের'. While most people translated it to theEnglish word ’market’, one user has treated it as propernoun and translated it to ’bazar’. Similarly synonyms ofwords can be used alternatively by different users whiletranslating. For example-3) বাচ্চারা মােঠ িকৰ্েকট েখলেছ।• The kids are playing cricket on the field.• Children are</s>
playing cricket in the field.
• Children are playing cricket in the playground.
• Kids are playing cricket in the playground.
• Children are playing cricket in the field.

Here, for the word 'বাচ্চারা' the two synonymous English words 'kids' and 'children' are used interchangeably, and the same happens with 'মােঠ', which can be translated as both 'playground' and 'field'. But the real problem arises with universal truths: different people translate these sentences very differently. For example:

4) দুভর্াগ বান তারাই যােদর পৰ্কৃত বনু্ধ েনই।
• Unlucky are those who don’t have real friend.
• Those who do not have true friends are unfortunate.
• Unlucky are those who don’t have real friends.
• Those are unfortunate who do not have true friends.

This much variation occurs because universal-truth sentences do not have a fixed sentence structure; they are perceived differently by different people, so the translations vary.

From the discussion above, we can say that the interchangeable use of nouns, pronouns and synonyms creates most of the variation in the translation process, and that sentences expressing universal truths need to be handled separately. Further work is needed to resolve these issues.

VI. CONCLUSIONS

Crowd-sourced data can serve as a promising method of corpus construction. It has the advantage of reflecting human behavior when translating from one language to another. The method needs further analysis, and more data, to construct a complete corpus; here we worked with an initial dataset to understand the method's performance and the issues surrounding corpus construction. The issues found in the analysis of this data need to be resolved in future work.

REFERENCES

[1] M. A. Al Mumin, A. A. M. Shoeb, M. R. Selim, and M. Z. Iqbal, “Sumono: A representative modern bengali corpus,” SUST Journal of Science and Technology, vol. 21, pp. 78–86, 2014.
[2] S. Khan, A. Ferdousi, and M. A.
Sobhan, “Creation and analysis of a new bangla text corpus BdNC01,” International Journal for Research in Applied Science & Engineering Technology (IJRASET), vol. 5, 2017.
[3] N. S. Dash and B. B. Chaudhuri, “Corpus-based empirical analysis of form, function and frequency of characters used in bangla,” in Proceedings of the Corpus Linguistics 2001 Conference, P. Rayson, A. Wilson, T. McEnery, A. Hardie, and S. Khoja, Eds. Lancaster: Lancaster University Press, UK, vol. 13, 2001, pp. 144–157.
[4] A. I. Sarkar, D. S. H. Pavel, and M. Khan, “Automatic bangla corpus creation,” BRAC University, Tech. Rep., 2007.
[5] K. M. A. Salam, S. Yamada, and T. Nishino, “Developing the first balanced corpus for bangla language,” in Informatics, Electronics & Vision (ICIEV), 2012 International Conference on. IEEE, 2012, pp. 1081–1084.
[6] M. A. Al Mumin, A. A. M. Shoeb, M. R. Selim, and M. Z. Iqbal, “Supara: A balanced english-bengali parallel corpus,” 2012.
[7] K. M. Majumder and Y. Arafat, “Analysis of and observations from a bangla news corpus,” 2006.
[8] J. Shamshed and S. M. Karim, “A novel bangla text corpus building method for efficient information retrieval,” Journal of Convergence Information Technology, vol. 1, no. 1, pp. 36–40, 2010.
[9] Chowdhury and Hossain, Advanced Learner’s Communicative English Grammar & Composition for Class-6 First & 2nd Paper, 21st
ed. Advanced Publication, 2016.
[10] “Crowd sourced translator and corpus construction project,” [Online]. Available: https://github.com/ZakiaRitu/Crowdsource_translator/, last accessed 5 November 2018.
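As a closing illustration of the variation analysis in Section V: translations that differ only in case or punctuation (e.g. "I won't eat rice now." vs "i wont eat rice now") can be collapsed automatically before counting genuinely different variants. The paper performs this comparison by hand; the sketch and function names below are ours.

```python
import string

# Count how many genuinely different translations a sentence received,
# treating case and ASCII punctuation differences as identical.

def normalize(translation):
    """Lowercase, strip ASCII punctuation, and trim outer whitespace."""
    table = str.maketrans("", "", string.punctuation)
    return translation.lower().translate(table).strip()

def distinct_variants(translations):
    """Number of distinct translations after normalization."""
    return len({normalize(t) for t in translations})

# The four crowd-sourced translations of one Table III sentence collapse
# to one surface variant plus one true paraphrase:
print(distinct_variants([
    "I won't eat rice now",
    "I will not eat rice now.",
    "I won't eat rice now.",
    "i wont eat rice now",
]))
```

Variation that survives this normalization (synonym choice, word order) is exactly the kind discussed for nouns, synonyms and universal truths above.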
Sequence-to-sequence Bangla Sentence Generation with LSTM Recurrent Neural Networks
ScienceDirect — Procedia Computer Science 152 (2019) 51–58
1877-0509 © 2019 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the scientific committee of the International Conference on Pervasive Computing Advances and Applications – PerCAA 2019. doi: 10.1016/j.procs.2019.05.026

Md. Sanzidul Islam (Student, Dept. of CSE, Daffodil International University, Dhaka-1207, Bangladesh), Sadia Sultana Sharmin Mousumi (Student, Dept. of CSE, Daffodil International University), Sheikh Abujar (Lecturer, Dept. of CSE, Daffodil International University), Syed Akhter Hossain (Dept. Head, Dept. of CSE, Daffodil International University)

Abstract

Sequence-to-sequence text generation is an efficient approach for automatically converting a script from a source sequence to a target sequence. Text generation is an application of natural language generation, useful in sequence modeling tasks such as machine translation, speech recognition, image captioning, language identification, video captioning and much more.
In this paper we discuss Bangla text generation using a deep learning approach: Long Short-Term Memory (LSTM), a special kind of Recurrent Neural Network (RNN). LSTM networks are suitable for analyzing sequences of text data and predicting the next word; an LSTM is a reasonable choice when predicting the next point of a given time sequence. We propose an artificial Bangla text generator with LSTM, which is early work for this language, and validate the model with a satisfactory accuracy rate.

Keywords: Language Modeling; Text Generation; NLP; Bangla Text; Sequence-to-sequence; RNN; LSTM; Deep Learning; Machine Learning

1. Introduction

Recurrent neural networks are neural networks designed to capture information from sequences or time-series data. They extend feed-forward neural networks and differ from other general architectures in that they can handle variable-length input. Hochreiter and Schmidhuber proposed the Long Short-Term Memory (LSTM) technique in 1997 [19]. It mitigates the vanishing gradient problem by adding extra gating machinery, is very efficient, and performs better than a plain RNN; it was a revolution over recurrent neural networks. It works well on sequence-based tasks and on any type of sequential data.

∗ Corresponding author. Tel.: +880 1736752047. E-mail address: sanzidul15-5223@diu.edu.bd
<s>RNN cannot handle backpropagation through long sequences very well, but LSTM can. RNN has a memory limitation, but LSTM has no such memory problem with long-running dependencies. RNN suffers from the same vanishing (or the less notorious exploding) gradient problem as fully connected networks, whereas LSTM handles the gradient properly. LSTM is better than RNN because LSTMs are explicitly designed to avoid the long-term dependency issue: remembering information for long periods of time is practically their default behavior, not something they struggle to learn.
1.1. Dataset Properties
The neural network we made was trained on a Bangla newspaper corpus. We collected a corpus of 917 days of newspaper text from Prothom Alo online; web scraping with Python helped a lot in doing this work automatically. The training dataset has the following properties:
• Total of 917 days of newspaper text.
• A daily newspaper's text contains 4,500 sentences on average.
• Those 4,500 sentences contain about 12,500 words.
• Those 12,500 words contain about 155,000 characters on average.
2. Literature Review
We propose a model which can generate sequence-to-sequence Bangla text. There are many research and development works in this field, but we can hardly find text-generation works with LSTM for the Bangla language. That is why we decided to build our own dataset and our own prediction model.
Naveen Sankaran et al. proposed a formulation in which recognition is cast as the training of a sequential translation method [1]. They worked on converting words from a document into a Unicode sequence directly. Praveen Krishnan et al. introduced an OCR system for seven different languages of India that follows a combined architecture and a segmentation-free method [2]. Their system was designed to support continuous learning while in use, e.g. from continuous user input.
They worked with the BLSTM method, a variant of the general LSTM.
A character-based encoder-decoder model trained to transliterate sequence to sequence was proposed by Amir H. Jadidinejad [3]. The proposed encoder, built with a Bidirectional Recurrent Neural Network, encodes a sequence of symbols into a fixed-length vector representation.
The results of the SIGMORPHON 2016 shared task indicated that the attentional sequence-to-sequence model of Bahdanau et al. is well suited to this task [4] [5].
Robert Ostling and Johannes Bjerva proposed a model constructed with a sequence-to-sequence artificial neural network and an LSTM architecture, which drew considerable attention from enthusiasts [6].
Yasuhisa Fujii et al. considered line-level script identification in the context of multilingual OCR. They considered several alternatives to an encoder-summarizer method within an up-to-date multilingual OCR framework, using an evaluation set of multi-domain line images covering 232 languages in 30 scripts [7].
A DNN-based SPSS system was built by Sivanand Achanta et al., which represents the acoustic parametric trajectories with a single vector via sequence-to-sequence auto-encoders [8].
Mikolov et al. established the importance of distributed representations and the ability to model arbitrarily long dependencies using recurrent-network-based language models [9] [10].</s>
<s>Sutskever et al. also produced meaningful sentences by adapting an RNN trained on a character-level corpus [11]. They introduced a new RNN model that uses multiplicative connections.
Karpathy et al. showed that an RNNLM is capable of generating image descriptions by training the neural network model with an RNN on top of a pre-trained model [12]. They constructed a multimodal RNN architecture.
Zhang and Lapata also describe remarkable work using RNNs to create Chinese poetry [13]. It was a good initiative at the time, able to generate a few lines of a Chinese poem automatically.
Mairesse and Young suggested a phrase-based NLG method based on factored LMs that can learn from a semantically aligned corpus [14]. They focused on crowd-sourced data and showed how to work with it.
Even though active learning was likewise recommended for learning online directly from users, the need for human-annotated alignments limits the scalability of the scheme by Mairesse et al. [15].
One more related approach casts NLG as a pattern extraction and matching problem, by Angeli et al. [16].
Kondadadi et al. show that the outputs can be further improved by an SVM ranker, making them comparable to human-authored texts [17]. They proposed an end-to-end generation technique with some local decisions.
Subhashini Venugopalan, Marcus Rohrbach, and Raymond Mooney suggest a novel sequence-to-sequence model to generate captions for videos. They produce descriptions with a sequence-to-sequence model in which frames are first read sequentially and then words are generated serially [18].
3. Method Discussion
3.1. RNN Structure
The LSTM network is a special type of RNN. An RNN is a neural network which attempts to model sequence- or time-dependent behavior.
This is done by feeding the output of a neural net layer at time t back to the input layer at time t + 1 (1). It looks like this [20]:
Fig. 1. Sequential nodes of Recurrent Neural Network.
Recurrent Neural Networks can be described as being unrolled programmatically during training and testing, so we see something like [20]:
Fig. 2. Unrolled Recurrent Neural Network.
The figure shows that at every step a new word is supplied together with the previous output (i.e. h_{t-1}), which is also passed on to the next step.
In principle, RNNs should be able to handle long-term dependencies. The issue was examined in detail by Hochreiter (1991) and Bengio et al. (1994), who identified some fairly fundamental reasons why it is difficult. That is why we use LSTM, a better form of RNN.
3.2. LSTM Networks
LSTMs, or Long Short-Term Memory networks, are a special type of RNN capable of learning long-term dependencies. They were introduced by Hochreiter and Schmidhuber in 1997, and were later refined and popularized by many people. LSTMs are explicitly designed to avoid the long-term dependency problem: keeping information over long periods of time is their default behavior, not something they struggle to learn. The graphical representation</s>
<s>of the LSTM cell can be shown as below [20]:
Fig. 3. LSTM Cell Diagram.
4. Proposed Methodology
4.1. Dataset Preprocessing
Working with Bangla is still quite difficult, as there are not many resources or R&D works in this field. Processing Bengali text data is therefore a hard task: the raw data are noisy and not directly suitable for machine learning or deep learning approaches. We did some preprocessing to make our dataset noise-free so that it performs at its best in the neural network:
• Removed all Bengali punctuation marks.
• Removed extra spaces and new lines.
• Converted the text into UTF-8 format.
4.2.
Proposed Method
In general, an LSTM network is complex compared to other methods; it demands considerable power from the hardware and the machine's capability. The whole interior activity and logic flow can be presented as below.
1) Input: First, the input is squashed with the tanh activation function between -1 and 1. This can be expressed as:
g = tanh(b_g + x_t U_g + h_{t-1} V_g)   (2)
where U_g and V_g are the weights for the input and the previous cell output, and b_g acts as the input bias. Note that the subscript (g) marks these as the input weights.
The input gate is:
i = σ(b_i + x_t U_i + h_{t-1} V_i)   (3)
The output of the LSTM input section is then:
g ∘ i   (4)
where ∘ denotes element-wise multiplication.
2) Forget gate and state loop: the forget gate expression is:
f = σ(b_f + x_t U_f + h_{t-1} V_f)   (5)
The element-wise product below combines the previous state with the forget gate:
s_{t-1} ∘ f   (6)
The new internal state then adds the gated input for the current time frame:
s_t = s_{t-1} ∘ f + g ∘ i   (7)
3) Output gate: the output gate is computed as:
o = σ(b_o +</s>
<s>x_t U_o + h_{t-1} V_o)   (8)
so that the final cell output, with tanh squashing, can be expressed as:
h_t = tanh(s_t) ∘ o   (9)
Finally, a very common form of the LSTM network equations can be taken from Colah's famous blog post [21]:
Fig. 4. LSTM networks equation.
That is how the Long Short-Term Memory (LSTM) network performs its operations sequentially, and why it performs so well on any type of sequential data. The LSTM network's activity flow can be presented as in the figure below; note the time-evolution terms that make LSTM different.
Fig. 5. LSTM networks activity flow.
4.3. Layer Description
Generally a neural network contains three layers: one taking input, one doing calculation, and one giving the decision. An input embedding layer was used as the initial layer of the neural network. Here a single line of text is trained, one line after another, sequentially.
Then comes the hidden layer. It is the main LSTM layer, and we used 100 units for it.
Finally, the output layer: an activation function named softmax is applied here. Softmax computes a probability distribution over n events; the function calculates the probability of each target class across all possible target classes:
S(y_i) = e^{y_i} / Σ_j e^{y_j}   (10)
4.4. Model Validation
The LSTM model is a little different from a validation perspective. Determining performance with cross-validation or a train-test accuracy, as one would for a CNN model [22], is not practical; it is actually better to test the model with real data and its output. We trained on only one week's newspaper corpus because of hardware limitations.
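The gate equations (2)–(9) and the softmax (10) above can be sketched in NumPy as a single LSTM time step. This is an illustrative sketch, not the paper's implementation; the weight-dictionary layout and all names are ours, with U weights applied to the input and V weights to the previous output, matching the equations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, s_prev, W):
    """One LSTM time step following Eqs. (2)-(9).
    W is a dict of illustrative weight matrices and biases."""
    # Eq. (2): squashed input
    g = np.tanh(W["bg"] + x_t @ W["Ug"] + h_prev @ W["Vg"])
    # Eq. (3): input gate
    i = sigmoid(W["bi"] + x_t @ W["Ui"] + h_prev @ W["Vi"])
    # Eq. (5): forget gate
    f = sigmoid(W["bf"] + x_t @ W["Uf"] + h_prev @ W["Vf"])
    # Eq. (7): new internal state = retained old state (Eq. 6) + gated input (Eq. 4)
    s_t = s_prev * f + g * i
    # Eq. (8): output gate
    o = sigmoid(W["bo"] + x_t @ W["Uo"] + h_prev @ W["Vo"])
    # Eq. (9): cell output with tanh squashing
    h_t = np.tanh(s_t) * o
    return h_t, s_t

def softmax(y):
    """Eq. (10), shifted by the max for numerical stability."""
    e = np.exp(y - y.max())
    return e / e.sum()

# Tiny usage example with random weights (input dim 4, hidden dim 3)
rng = np.random.default_rng(0)
W = {}
for name in ("g", "i", "f", "o"):
    W["U" + name] = rng.normal(size=(4, 3))
    W["V" + name] = rng.normal(size=(3, 3))
    W["b" + name] = np.zeros(3)
h, s = lstm_step(rng.normal(size=4), np.zeros(3), np.zeros(3), W)
print(h.shape)  # (3,)
```

Stacking this step over a token sequence and feeding `softmax` the final output is, in essence, what the embedding-LSTM-softmax stack of Section 4.3 does with learned weights.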
And finally we tested with different Bengali words; the model then generated text conditioned on the preceding text. Here are two Bangla sentences generated with our model:
Fig. 6. Testing model (example-1).
Fig. 7. Testing model (example-2).
5. Future Work
In this paper we worked with less data due to hardware limitations; we will enlarge our dataset later. In future we will improve the model to achieve multi-task sequence-to-sequence text generation and multi-way translation, such as Bengali articles and caption generation. Furthermore, we aim to pursue the possibility of extending our model to Bangla regional languages. We also plan to work with Bangla Sign Language [23] generation from sequential image data, just as with general spoken language.
References
[1] Naveen Sankaran T, Aman Neelappa, C V Jawahar, Devanagari Text Recognition: A Transcription Based Formulation, 12th International Conference on Document Analysis and Recognition, 25-28 Aug. 2013, Washington DC, USA.
[2] Praveen Krishnan, Naveen Sankaran T, Ajeet Kumar Singh, C V Jawahar, Towards a Robust OCR System for Indic Scripts, International Workshop on Document Analysis Systems, Centre for Visual Information Technology, IIIT Hyderabad, India, April 2014.
[3] Amir H. Jadidinejad, Neural Machine Transliteration: Preliminary Results, arXiv:1609.04253v1 [cs.CL], 14 Sep 2016.
[4] Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner,</s>
<s>and Mans Hulden, The SIGMORPHON 2016 Shared Task: Morphological Reinflection, in Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, Association for Computational Linguistics, Berlin, Germany, pages 10-22, 2016.
[5] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, Neural Machine Translation by Jointly Learning to Align and Translate, CoRR abs/1409.0473, 2014.
[6] Robert Ostling and Johannes Bjerva, SU-RUG at the CoNLL-SIGMORPHON 2017 Shared Task: Morphological Inflection with Attentional Sequence-to-Sequence Models, arXiv:1706.03499v1 [cs.CL], 12 Jun 2017.
[7] Yasuhisa Fujii, Karel Driesen, Jonathan Baccash, Ash Hurst and Ashok C. Popat, Sequence-to-Label Script Identification for Multilingual OCR, Google Research, Mountain View, CA 94043, USA, arXiv:1708.04671v2 [cs.CV], 17 Aug 2017.
[8] Sivanand Achanta, KNRK Raju Alluri and Suryakanth V Gangashetty, Statistical Parametric Speech Synthesis Using Bottleneck Representation From Sequence Auto-encoder, Speech and Vision Laboratory, IIIT Hyderabad, India, arXiv:1606.05844v1 [cs.SD], 19 Jun 2016.
[9] Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur, Recurrent Neural Network Based Language Model, in Proceedings of InterSpeech, 2010.
[10] Tomas Mikolov, Stefan Kombrink, Lukas Burget, Jan H. Cernocky and Sanjeev Khudanpur, Extensions of Recurrent Neural Network Language Model, in ICASSP 2011, IEEE International Conference, 2011.
[11] Ilya Sutskever, James Martens and Geoffrey E.
Hinton, Generating Text with Recurrent Neural Networks, in Proceedings of the 28th International Conference on Machine Learning (ICML-11), ACM, 2011.
[12] Andrej Karpathy and Li Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Descriptions, CoRR, 2014.
[13] Xingxing Zhang and Mirella Lapata, Chinese Poetry Generation with Recurrent Neural Networks, in Proceedings of the 2014 Conference on EMNLP, Association for Computational Linguistics, October 2014.
[14] Francois Mairesse and Steve Young, Stochastic Language Generation in Dialogue Using Factored Language Models, Computational Linguistics, 2014.
[15] Francois Mairesse, Milica Gasic, Filip Jurcicek, Simon Keizer, Blaise Thomson, Kai Yu and Steve Young, Phrase-Based Statistical Language Generation Using Graphical Models and Active Learning, in Proceedings of the 48th ACL, ACL '10, 2010.
[16] Gabor Angeli, Percy Liang, and Dan Klein, A Simple Domain-Independent Probabilistic Approach to Generation, in Proceedings of the 2010 Conference on EMNLP, EMNLP '10, Association for Computational Linguistics, 2010.
[17] Ravi Kondadadi, Blake Howald, and Frank Schilder, A Statistical NLG Framework for Aggregated Planning and Realization, in Proceedings of the 51st Annual Meeting of the ACL, Association for Computational Linguistics, 2013.
[18] Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell and Kate Saenko, Sequence to Sequence - Video to Text, arXiv:1505.00487v3 [cs.CV], 19 Oct 2015.
[19] Sepp Hochreiter and Jurgen Schmidhuber, Long Short-Term Memory, Neural Computation 9.8 (1997): 1735-1780.
[20] Adventuresinmachinelearning.com, Keras LSTM Tutorial - How to Easily Build a Powerful Deep Learning Language Model, 2018. [Online]. Available: http://www.adventuresinmachinelearning.com/keras-lstm-tutorial/ [Accessed: 14 Aug 2018].
[21] Colah.github.io, Understanding LSTM Networks, 2015. [Online].
Available: http://colah.github.io/posts/2015-08-Understanding-LSTMs/ [Accessed: 14 Aug 2018].
[22] Islam, Sanzidul, et al., "A Potent Model to Recognize Bangla Sign Language Digits Using Convolutional Neural Network," Procedia Computer Science 143 (2018): 611-618.
[23] Islam, Md Sanzidul, et al., "Ishara-Lipi: The First Complete Multipurpose Open Access Dataset of Isolated Characters for Bangla Sign Language," 2018 International Conference on Bangla Speech and</s>
<s>Language Processing (ICBSLP). IEEE, 2018.</s>
<s>Design and implementation of an efficient DeConverter for generating Bangla sentences from UNL expression. Conference Paper, June 2015. DOI: 10.1109/ICIEV.2015.7334006.
Design and Implementation of an Efficient DeConverter for Generating Bangla Sentences from UNL Expression. Aloke Kumar Saha, Computer Science & Engineering, University of Asia Pacific, Dhaka, Bangladesh, aloke71@yahoo.com; Md. Firoz Mridha, Computer Science & Engineering, University of Asia Pacific, Dhaka, Bangladesh, mdfirozm@yahoo.com; Molla Rashied Hussein, Computer Science & Engineering, University of Asia Pacific, Dhaka, Bangladesh, mrh.cse@uap-bd.edu; Jugal Krishna Das, Computer Science & Engineering, Jahangirnagar University, Savar, Dhaka, Bangladesh, cedas@juniv.edu
Abstract—In this paper, the design and implementation of a Bangla DeConverter for DeConverting Universal Networking Language (UNL) expressions into the Bangla Language is propounded. The UNL is an Artificial Language, which not only facilitates the translation stratagem between all the Natural Languages across the world, but also proffers the unification of those Natural Languages as well. The DeConverter is the core software contrivance in a UNL system. The paper also focuses on the Linguistic Analysis of the Bangla Language for the DeConversion process. A set of DeConversion rules has been developed for converting UNL expressions to Bangla. Experimental results show that these rules successfully generate correct Bangla text from UNL expressions. These rules can currently produce basic and simple Bangla sentences; however, the system is being extended to handle advanced and complex sentences.
Keywords—DeConverter, EnConverter, Machine Translation, UNL.
I. INTRODUCTION
In this era of Information Technology (IT), the World Wide Web (WWW) has become the nucleus of essential information. However, a large amount of resources is still beyond the reach of a significant portion of society just because of the man-made Language Barrier.
There is a great need to translate digital contents, including but not limited to Websites, Blogs, Online News Portals, E-books, E-Journals and E-mails, into the Native Language to overcome that Language Barrier. In this multilingual milieu, Machine Translation (MT) is considered an important tool to unshackle the cordoned mankind. UNL-based MT (developed with an interlingua-based approach) is also an effort in this direction. The UNL program was primarily launched back in 1996 at the Institute of Advanced Studies (IAS) of the United Nations University (UNU), Tokyo, Japan [1], and it is currently supported by the Universal Networking Digital Language (UNDL) Foundation, an autonomous organization founded in 2001 as an extension of that UNL program, with its Head-Quarters (HQ) situated at Geneva, Switzerland [2]. The approach in UNL</s>
<s>pertains to the development of the EnConverter and the DeConverter for a Natural Language. The EnConverter is used to convert a given sentence in a Natural Language into an equivalent UNL expression, and the DeConverter is used to do the reverse, i.e. to convert a given UNL expression into an equivalent Natural Language sentence. A UNL system has the potential to knock down Language Barriers across the world with the development of an optimal 2n components, whereas the traditional pairwise approaches require n(n−1) components, where n is the number of Languages. In this paper, the design and development of a Bangla DeConverter is presented, with emphasis on the DeConversion rules and the Semantic ambiguity handling of the Bangla DeConverter. Syntactic alignment is the process of defining the arrangement of words in the target output; this phase plays a vital role in the accuracy of the generation process.
II. UNL SYSTEM AND ITS STRUCTURE
The UNL system consists of two core tools, namely the EnConverter and the DeConverter, which are used for the particular Natural Language Processing (NLP), a major branch of Artificial Intelligence (AI). The process of converting a source Language, i.e. a Natural Language expression, into the desired UNL expression is referred to as EnConversion, and the process of converting a UNL expression into a target or destination Language, i.e. the desired Native Language expression, is referred to as DeConversion. The EnConverter and DeConverter for a Language form a Language Server that may reside on the Internet. Both the EnConverter and the DeConverter perform their functions on the basis of a set of Grammar rules and a Word Dictionary of the Native Language. A UNL representation consists of UNL relations, UNL Attributes (UAs) and Universal Words (UWs). UWs are represented by their English equivalents. These words are listed in the Universal Word Lexicon of the UNL knowledge base [6]. Relations are the building blocks of UNL sentences.
The relations between the words are drawn from a set of predefined relations [3, 4, 7, 8, 9, 10]. The attribute labels are attached to UWs to provide additional information like Tense, Number etc. For example, "করিম কলা খায়" (in English, "Karim eats banana") can be represented as a UNL expression:
{unl}
agt(eat(icl>consume>do,agt>living_thing,obj>concrete_thing,ins>thing).@entry.@present,karim(icl>name>abstract_thing,com>male,nam<person))
obj(eat(icl>consume>do,agt>living_thing,obj>concrete_thing,ins>thing).@entry.@present,banana(icl>herb>thing))
{/unl}
Here, it should be noted that agt is the UNL relation which indicates "a thing which initiates an action"; obj is another UNL relation which indicates "a thing in focus which is directly affected by an event"; @entry and @present are UNL attributes which indicate the main Verb and Tense information; and @sg is a UNL attribute which indicates the Number information.
III. HOW THE DECONVERTER WORKS
The DeConverter is a Language Independent Generator (LIG), which provides a Framework for the Syntactic and Morphological generation of a Native Language. It can convert UNL Expressions into Natural Languages using corresponding Word Dictionaries and sets of DeConversion Rules for converting to the desired</s>
<s>Native, i.e. Target, Languages. A Word Dictionary contains the Information of Words, which correspond to the UWs included in the input UNL Expressions, and the Grammatical Attributes (GAs), which describe the behaviors of the Words. DeConversion Rules describe how to construct a Sentence using the Information from the input UNL Expressions and defined in a Word Dictionary. The DeConverter converts UNL Expressions into Sentences of a Target Language following the descriptions of the Generation Rules. The UNL Ontology is also helpful when no corresponding Word for a particular UW exists in that Language. In this case, the DeConverter consults the UNL Ontology and tries to find a more general UW for which a corresponding Word exists in its Word Dictionary, and consequently uses the word of the upper UW to generate the Target Sentence. The DeConverter works in the following way. First, it transforms the input UNL expression, a set of binary relations, into a Directed Graph (DG) structure with Hyper-Nodes, called a Node-Net. The Root Node of a Node-Net is called the Entry Node and represents the Head (e.g. the main Verb) of a Sentence. DeConversion of a UNL Expression is carried out by applying Generation Rules to the Nodes of the Node-Net. It starts from the Entry Node, finds an appropriate Word for each Node, and generates a Word sequence (a list of words in grammatical order) of the Target Language. In this process, the Syntactic structure is determined by applying Syntactic Rules, and Morphemes are similarly generated by applying Morphological Rules. The DeConversion process ends when words for all Nodes are found and a Word sequence of the Target Sentence is completed. Fig. 1 shows the structure of the DeConverter: "G" indicates Generation Windows, and "C" indicates Condition Windows of the DeConverter. The DeConverter operates on the Node-List through Generation Windows. Condition Windows are used to check conditions when applying a Rule.
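The Node-Net construction described above can be sketched as follows, using the "Karim eats banana" example from Section II. This is a toy illustration, not the paper's system: the function and data names are ours, UW restrictions are abbreviated for readability, and a real DeConverter parses the full {unl} block with its nested restrictions.

```python
def build_node_net(relations):
    """Build a directed Node-Net from UNL binary relations and locate the
    Entry Node (the UW carrying the @entry attribute).
    `relations` is a list of (relation, head_uw, dep_uw) tuples --
    a simplified stand-in for a parsed {unl} block."""
    net = {}
    entry = None
    for rel, head, dep in relations:
        net.setdefault(head, []).append((rel, dep))
        net.setdefault(dep, [])          # leaf nodes get an empty adjacency list
        if "@entry" in head:
            entry = head
    return net, entry

# "Karim eats banana" from Section II, with abbreviated UWs
relations = [
    ("agt", "eat.@entry.@present", "karim"),
    ("obj", "eat.@entry.@present", "banana"),
]
net, entry = build_node_net(relations)
print(entry)                              # eat.@entry.@present
print(sorted(r for r, _ in net[entry]))   # ['agt', 'obj']
```

Generation would then start from `entry` (the main Verb) and walk the adjacency lists, applying Syntactic and Morphological Rules at each Node.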
In the initial stage, in contrast to the EnConverter [5], the Entry Node of a UNL Expression exists in the Node-List. At the end of DeConversion, the Node-List is the list of all Morphemes, each as a Node, that are converted from the Node-Net and constitute the Target Sentence [6].
Figure 1. DeConverter structure.
IV. DESIGN AND IMPLEMENTATION OF THE BANGLA DECONVERTER
The DeConverter makes use of Language-Independent (LI) and Language-Dependent (LD) components during the generation process [11]. The first stage of the DeConverter is the UNL parser, which parses an input UNL expression and builds a Node-Net from it. During the Lexeme selection stage, Bangla Root Words and their Dictionary Attributes are selected for the given UWs in the input UNL expression from the Bangla-UW Dictionary. After that, the Nodes are ready for the generation of Morphology according to the Target Language in the Morphology phase. In this stage, the Root Words may be changed; i.e., something can be added or removed to obtain the complete sense of the Words. The system makes use of Morphological Rules for this purpose. In the</s>
<s>Function Word insertion phase, Function Words or Case Markers, such as িট, টা, খানা, খািন, র, eর, য়, েত, েথেক, েচেয় etc., are inserted into the Morphed Words. These Function Words are inserted in a generated Sentence based on the Rule-based design for this situation [12]. Finally, the Syntactic Linearization phase defines the Word order in the generated Sentence, so that the output matches a Natural Language Sentence [13]. The working of the Bangla DeConverter is illustrated with the example Sentence given below:
Bangla Sentence: েছেলরা মােঠ ফুটবল েখেল।
Transliterated Sentence: Chelara mathe football khele.
Equivalent English Sentence: Boys play football in the field.
The UNL expression for the example Sentence is given below:
{unl}
agt(play(icl>compete>do,agt>thing).@entry.@present,boy(icl>child>person.@pl))
man(play(icl>compete>do,agt>thing).@entry.@present,football(icl>field_game>thing))
plc(football(icl>field_game>thing),field(icl>tract>thing).@def)
{/unl}
To convert the UNL expression to a Bangla Natural Language Sentence, the Bangla DeConverter is used. The UNL expression acts as input for the Bangla DeConverter [14]. The UNL parser checks the input UNL expression for errors and generates the Node-Net. The Lexeme selection phase populates the Node-List with the equivalent Bangla Words for the UWs given in the input UNL expression. The populated Node-List is given below:
Node1: Bangla word: েখেল; UW: play(icl>compete>do,agt>thing).@entry.@present
Node2: Bangla word: েছেলরা; UW: boy(icl>child>person.@pl)
Node3: Bangla word: ফুটবল; UW: football(icl>field_game>thing)
Node4: Bangla word: মােঠ; UW: field(icl>tract>thing).@def
In the Morphology phase, Morphological Rules are applied to modify the Bangla Words stored in the Nodes according to the UNL Attributes given in the input UNL expression and the Dictionary Attributes retrieved from the Bangla-UW Dictionary [14]. The Nodes are processed by the Morphological Rules.
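A UNL parser such as the one described above must split each binary relation into its Relation label, Parent term, and Child term. A minimal sketch (not the authors' implementation) that balances parentheses to locate the single top-level comma:

```python
# Minimal sketch (not the authors' parser) of splitting one UNL binary
# relation, e.g.  agt(play(icl>do).@entry, boy(icl>person).@pl),
# into (relation, parent term, child term).  A full UNL parser handles
# more syntax; this only finds the top-level comma between the two terms.

def parse_relation(line):
    rel, rest = line.split("(", 1)         # relation label before the first "("
    body = rest.rsplit(")", 1)[0]          # drop only the outermost closing ")"
    depth = 0
    for i, ch in enumerate(body):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch == "," and depth == 0:     # top-level comma separates the terms
            return rel, body[:i].strip(), body[i + 1:].strip()
    raise ValueError("not a binary relation: " + line)

rel, parent, child = parse_relation(
    "agt(play(icl>compete>do,agt>thing).@entry.@present,"
    "boy(icl>child>person.@pl))")
print(rel)     # → agt
print(child)   # → boy(icl>child>person.@pl)
```

Applying this to each relation line of the `{unl}` block above yields exactly the Parent/Child terms from which the Node-Net and the Node-List are populated.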
It is evident that, in the Morphology phase, েখল ‘play’ is changed to েখলা ‘played’ and েছেল ‘boy’ is changed to েছেলরা by Morphological Rules. The Function Word insertion phase inserts Function Words in the Morphed Lexicon [15]. Nodes processed by the Function Word insertion phase are given below: In this phase, Case Markers রা and e ‘in’ are added to Node2 and Node4, respectively, according to the Function Word Rule-based insertion. In the Syntactic Linearization phase, one traverses the Nodes in a specific sequence based on the Syntactic Rule-based Linearization for Bangla Language [16]. The sequence for processing of Nodes and the Bangla Sentence generated by this sequence is given below: Node2 Node4 Node3 Node1 েছেলরা মােঠ ফুটবল েখেল। It is evident from the generated Bangla Sentence that the system is able to convert an UNL expression input into a Bangla Natural Language Sentence successfully. The descriptions of different phases of the Bangla DeConverter are given in the following segment. A. Morphology generation The System makes use of Generation Rules during this process. These Generation Rules are designed on the basis of Bangla Morphological analysis. There are three Categories of Morphology that have been identified for the purpose of converting a UNL expression to equivalent Bangla Language Sentences. They are: i) Attribute Label Resolution Morphology, ii) Relation Label Resolution Morphology, and iii) Noun, Adjective, Pronoun, and Verb Morphology. Among these three, major two Morphologies are discussed in details as follow: i) Attribute Label Resolution Morphology deals with generation</s>
<s>of Bangla Words on the basis of the UNL Attributes attached to a Node and its Grammatical Attributes retrieved from the Lexicon. The Root Words retrieved from the Bangla-UW Dictionary are modified in this phase, depending on their Gender, Number, Person, Tense, Aspect, Modality, and Vowel-ending Information. ii) Relation Label Resolution Morphology manages the Prepositions of English, or rather the Postpositions of Bangla, because Prepositions in English correspond to Postpositions in Bangla. These link Nouns, Pronouns, and Phrases to other parts of the Sentence. The insertion of some Function Words in the generated output depends upon the UNL Relation and the Conditions imposed on the Attributes of the Parent and Child Nodes in a Relation. A Rule Base has been prepared for this purpose. For each of the 46 UNL Relations, different Function Words are used depending upon the grammatical details of a Target Language [17]. This Rule Base consists of nine Columns, which store the Relation, the Function Word to be inserted, and the Attributes whose presence or absence needs to be asserted on the Parent and Child Nodes for firing of the Rule. If more than one Attribute needs to be asserted on a given Node for firing of a Rule, they are stored in the Rule Base separated by the ‘#’ sign. Here, Attributes represent UNL Attributes (obtained from a given UNL expression) or Lexical Attributes (obtained from the Bangla-UW Dictionary) of a Node. The Rule Base for Function Word insertion is illustrated with the example Rule given below:
agt:null:null:null::@present#V:VINT#@progress#েখল:N#3rd:1st#2nd
Here, ‘agt’ is the UNL Relation under consideration, and firing of the given Rule will result in the insertion of the Function Word ে◌ following the Child Node in the generated output, because the Function Word appears in the Fifth Column and the Second, Third, and Fourth Columns contain ‘null’ in the Rule.
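Under the column layout of this Rule Base (the Relation in the first Column, the Function Word in the fifth, and the required/forbidden Attribute sets for the Parent and Child Nodes in the sixth through ninth), rule firing can be sketched as follows. The column indices, the helper names, and the ASCII placeholder 'FW' standing in for the Bangla Function Word ে◌ are illustrative assumptions, and the Bangla attribute েখল is omitted from the simplified example rule.

```python
# Sketch of firing one Function-Word-insertion rule from the nine-column
# Rule Base.  Assumed simplifications: the Bangla attribute েখল is omitted
# from the example rule and the Function Word ে◌ is replaced by the ASCII
# placeholder "FW".  Column meanings (1-based, as described in the text):
#   1: relation, 5: Function Word to insert,
#   6: attributes required on the Parent, 7: attributes forbidden on the Parent,
#   8: attributes required on the Child,  9: attributes forbidden on the Child.

def split_attrs(col):
    # '#'-separated attribute list; "null" or empty means "no constraint"
    return set() if col in ("", "null") else set(col.split("#"))

def rule_fires(rule, relation, parent_attrs, child_attrs):
    cols = rule.split(":")
    if cols[0] != relation:
        return None
    req_parent, bad_parent = split_attrs(cols[5]), split_attrs(cols[6])
    req_child, bad_child = split_attrs(cols[7]), split_attrs(cols[8])
    if (req_parent <= parent_attrs and not (bad_parent & parent_attrs)
            and req_child <= child_attrs and not (bad_child & child_attrs)):
        return cols[4]                     # the Function Word to insert
    return None

rule = "agt:null:null:null:FW:@present#V:VINT#@progress:N#3rd:1st#2nd"
word = rule_fires(rule, "agt",
                  parent_attrs={"@present", "V", "@entry"},
                  child_attrs={"N", "3rd", "@pl"})
print(word)   # → FW
```

If any forbidden Attribute is present, or any required Attribute is absent, the rule simply does not fire and no Function Word is inserted for that Relation.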
The Sixth Column contains ‘@present#V’, which means that the Rule will be fired if the Parent of the ‘agt’ relation contains ‘@present’ as its UNL Attribute in the given input UNL expression and has ‘V’ as its Lexical Attribute in the Bangla-UW Dictionary. The Seventh Column contains ‘VINT#@progress#েখল’, which refers to the Attributes whose absence needs to be asserted on the Parent Node for firing of the Rule. It means that the Parent Node should not contain the ‘VINT’ (Intransitive Verb) or ‘েখল’ (‘play’ Verb) Attributes in the Lexicon, or the ‘@progress’ Attribute in the Parent Node of the UNL expression. The Eighth Column of the Rule contains ‘N#3rd’, which refers to the Attributes whose presence needs to be asserted on the Child Node for firing of the Rule; i.e., the Child should have the ‘N’ (Noun) and ‘3rd’ (Third Person) Attributes in the Bangla-UW Dictionary. The Ninth Column contains ‘1st#2nd’, which refers to the Attributes whose absence needs to be asserted on the Child Node for firing of the Rule. It means that the Child Node should not refer to the First Person or the Second Person in the Sentence [18]. Thus, if the relation ‘agt’ has a Parent Node with the ‘@present’ and ‘V’ Attributes, without the ‘VINT’, ‘েখল’, ‘@progress’, or ‘@custom’ Attributes, and has a</s>
<s>Child Node with the ‘N’ and ‘3rd’ Attributes and without the ‘1st’ or ‘2nd’ Attribute, then the Function Word ে◌ will be inserted following the Child Node in the generated output [19]. For example, in the UNL relation ‘agt(play(agt>human,obj>game).@present.@entry,boy(icl>maleperson))’ of a UNL expression, the Parent Node of the relation ‘agt’ is ‘play(agt>human,obj>game)’, having the ‘V’ and ‘@present’ Attributes and without the ‘VINT’ Attribute in the Lexicon. The Child Node of the ‘agt’ relation is ‘boy(icl>male child)’, which has the ‘N’ and ‘3rd’ Attributes and does not have the ‘1st’ and ‘2nd’ Attributes in the Lexicon. As a result, the firing of the Rule will occur, and thus the Function Word ে◌ will be generated following the Child Node ‘boy(icl>male child)’ in the generated output.
V. EXPERIMENTAL RESULTS AND SYSTEM TESTING
The System has been tested on several UNL Expressions. It has been observed that the System successfully deals with the resolution of UNL Relations and generates Attributes for those Sentences. The System has been tested with the help of UNL Expressions available in the Russian UNL Language Server. The given English Sentences were manually translated at the Russian Language Server into equivalent UNL Expressions, and then those equivalent UNL Expressions were fed into the proposed UNL-Bangla DeConverter mechanism. Comparative analysis is presented in Table I for five Sentences. Accuracy will improve as more Sentences are tested and more Rules are appended. The GUI of the Bangla DeConverter is classified into the following three Windows: (1) Bangla Testing Server, (2) DeConversion, and (3) Intermediate output. Figure 2 shows the Bangla DeConverter Input, and Figure 3 shows the Bangla DeConverter Output, which is generated by the proposed Bangla DeConverter.
Figure 2. Bangla DeConverter Input
Figure 3. Bangla DeConverter Output
Sl. 1
UNL Expression: {unl} agt(read(icl>see>do,agt>person,obj>information).@entry.@present.@progress,kerim(icl>name>abstract_thing,com>male,nam<person)) {/unl}
Relations Resolved: agt
Equivalent English Sentence: Karim is reading.
Bangla Sentence generated by DeConverter: “কিরম পিড়েতেছ” (Karim Poritechhe.)

Sl. 2
UNL Expression: {unl} agt(eat(icl>consume>do,agt>living_thing,obj>concrete_thing,ins>thing).@entry.@present,i(icl>person)) obj(eat(icl>consume>do,agt>living_thing,obj>concrete_thing,ins>thing).@entry.@present,rice(icl>grain>thing)) {/unl}
Relations Resolved: agt, obj
Equivalent English Sentence: I eat rice.
Bangla Sentence generated by DeConverter: “আিম ভাত খাi” (Aami vat khai.)

Sl. 3
UNL Expression: {unl} agt(write(icl>do,agt>person,obj>concrete_thing,ins>functional_thing).@entry.@past,he(icl>person)) obj(write(icl>do,agt>person,obj>concrete_thing,ins>functional_thing).@entry.@past,note(icl>personal_letter>thing).@indef) ins(write(icl>do,agt>person,obj>concrete_thing,ins>functional_thing).@entry.@past,pen(icl>writing_implement>thing).@indef) {/unl}
Relations Resolved: agt, ins, obj
Equivalent English Sentence: He wrote a Note with a pen.
Bangla Sentence generated by DeConverter: “েস কলম িদেয় eকিট েনাট িলেখিছল” (Se kolom die ekti note likhechhilo.)

Sl. 4
UNL Expression: {unl} obj(fly(icl>move>occur,equ>wing,com>air,plt>thing,plf>thing,obj>concrete_thing,plc>thing,ins>thing).@entry.@present,bird(icl>vertebrate>thing).@def) plf(fly(icl>move>occur,equ>wing,com>air,plt>thing,plf>thing,obj>concrete_thing,plc>thing,ins>thing).@entry.@present,nest(icl>retreat>thing).@def) {/unl}
Relations Resolved: agt, frm, obj
Equivalent English Sentence: The bird flies from the nest.
Bangla Sentence generated by DeConverter: “পািখিট বাসা েথেক uেড় যায়” (Pakhiti basha theke ure jae.)

Sl. 5
UNL Expression: {unl} aoj(live(icl>be,com>style,aoj>person,man>uw).@entry.@present,we(icl>group).@pl) plc(live(icl>be,com>style,aoj>person,man>uw).@entry.@present,dhak{/unl}
Relations Resolved: aoj, plc
Equivalent English Sentence: We live in Dhaka.
Bangla Sentence generated by DeConverter: “আমরা ঢাকায় থািক” (Amra dhakae thaki.)

Table I. Bangla Sentences generated by the DeConverter, with their corresponding input UNL expressions (generated by the Russian UNL Language Server)
VI. CONCLUSION AND FUTURE WORK
In this paper, a Rule-Based Bangla DeConverter has been proffered.
These Rules can currently convert simple UNL Expressions into Bangla Sentences. The System is being extended to handle advanced and complex Sentences. The proposed System has been tested on more than 2000 UNL Expressions and achieved an accuracy of 89%, which is a notable result in this Field of Study. Moreover, a Web interface has been designed for online DeConversion of a UNL expression into the corresponding Bangla Sentence. It empowers the Bangla</s>
<s>Readers to read Sentences in their Local Language, even though those Sentences were written initially in a different Language, by having them converted through their equivalent UNL expressions presented on the Web. This System will also provide an opportunity for Researchers working on MT to explore and expand the UNL beyond its limits, to construct the Interlingua Utopia, where Language will no longer be an obstacle for Mankind. Knowledge should not be contained in a jar, but rather be allowed to diffuse in an open atmosphere.
REFERENCES
[1] UNL System. Website link: http://www.unl.ru/system.html. Date of last retrieval: March 14, 2015.
[2] Universal Networking Language Portal. Website link: http://www.undl.org. Date of last retrieval: March 14, 2015.
[3] H. Uchida, M. Zhu, T. G. Della Senta. Universal Networking Language, 2005/6-UNDL Foundation, International Environment House.
[4] H. Uchida, M. Zhu. The Universal Networking Language (UNL) Specification Version 3.0, Edition 3, Technical Report, UNU, Tokyo: 2005/6-UNDL Foundation, International Environment House, 2004.
[5] EnConverter Specification, Version 3.3, UNL Center/UNDL Foundation, Tokyo 150-8304, Japan, 2002.
[6] DeConverter Specification, Version 2.7, UNL Center/UNDL Foundation, Tokyo 150-8304, Japan, 2002.
[7] D. M. Shahidullah. Bangla Baykaron, Dhaka: Ahmed Mahmudul Haque of Mowla Brothers prokashani, 2003.
[8] D. C. Shuniti Kumar. Bhasha-Prakash Bangala Vyakaran, Calcutta: Rupa and Company Prokashoni, July 1999, pp. 170-175.
[9] Humayun Azad. Bakkotottyo, Second edition, Dhaka: Bangla Academy Publishers, 1994.
[10] D. S. Rameswar. Shadharan Vasha Biggan and Bangla Vasha, Pustok Biponi Prokashoni, November 1996, pp. 358-377.
[11] M. N. Y. Ali, J. K. Das, S. M. Abdullah Al Mamun, M. E. H. Choudhury, “Specific Features of a Converter of Web Documents from Bengali to Universal Networking Language”, International Conference on Computer and Communication Engineering 2008 (ICCCE’08), Kuala Lumpur, Malaysia, pp. 726-731.
[12] M. N. Y. Ali, J. K. Das, S. M. Abdullah Al Mamun, A. M. Nurannabi. “Morphological Analysis of Bangla Words for Universal Networking Language”, International Conference on Digital Information Management (ICDIM), 2008, London, England, pp. 532-537.
[13] M. N. Y. Ali, A. M. Nurannabi, G. F. Ahmed, J. K. Das. “Conversion of Bangla Sentence for Universal Networking Language”, International Conference on Computer and Information Technology (ICCIT), Dhaka, 2010, pp. 108-113.
[14] M. Z. H. Sarker, M. N. Y. Ali, J. K. Das, “Dictionary Entries for Bangla Consonant Ended Roots in Universal Networking Language”, International Journal of Computational Linguistics (IJCL), Volume 3, Issue 1, 2012, pp. 79-87.
[15] M. Z. H. Sarker, M. N. Y. Ali, J. K. Das, “Outlining Bangla Word Dictionary for Universal Networking Language”, IOSR Journal of Computer Engineering (IOSRJCE), ISSN: 2278-0661, Sep-Oct. 2012.
[16] M. Z. H. Sarker, M. N. Y. Ali, J. K. Das, “Development of Dictionary Entries for the Bangla Vowel Ended Roots for Universal Networking Language”, International Journal of Computer Applications (IJCA), 52(19), August 2012. Published by Foundation of Computer Science, New York, USA, pp. 38-45.
[17] Aloke Kumar Saha, Muhammad F. Mridha and Jugal Krishna Das, “Analysis of Bangla Root Word for Universal Networking Language (UNL)”, International Journal of Computer Applications (IJCA) (0975-8887), Volume 89, No. 17, March 2014.
[18] Muhammad F. Mridha, Aloke Kumar Saha and Jugal Krishna Das, “New Approach</s>
<s>of Solving Semantic Ambiguity Problem of Bangla Root Words Using Universal Networking Language (UNL)”, 3rd International Conference on Informatics, Electronics & Vision (ICIEV), 23-24 May, 2014.
[19] Muhammad F. Mridha, Aloke Kumar Saha, Mahadi Hasan and Jugal Krishna Das, “Solving Semantic Problem of Phrases in NLP Using Universal Networking Language (UNL)”, International Journal of Advanced Computer Science and Applications (IJACSA), Special Issue on Natural Language Processing (NLP), 2014.</s>
<s>/MonoImageDepth -1 /MonoImageDownsampleThreshold 1.00167 /EncodeMonoImages true /MonoImageFilter /CCITTFaxEncode /MonoImageDict << /K -1 /AllowPSXObjects false /CheckCompliance [ /None /PDFX1aCheck false /PDFX3Check false /PDFXCompliantPDFOnly false /PDFXNoTrimBoxError true /PDFXTrimBoxToMediaBoxOffset [ 0.00000 0.00000 0.00000 0.00000 /PDFXSetBleedBoxToMediaBox true /PDFXBleedBoxToTrimBoxOffset [ 0.00000 0.00000 0.00000 0.00000 /PDFXOutputIntentProfile (None) /PDFXOutputConditionIdentifier () /PDFXOutputCondition () /PDFXRegistryName () /PDFXTrapped /False /CreateJDFFile false /Description << /ARA <FEFF06270633062A062E062F0645002006470630064700200627064406250639062F0627062F0627062A002006440625064606340627062100200648062B062706260642002000410064006F00620065002000500044004600200645062A064806270641064206290020064406440639063106360020063906440649002006270644063406270634062900200648064506460020062E06440627064400200631063306270626064400200627064406280631064A062F002006270644062506440643062A063106480646064A00200648064506460020062E064406270644002006350641062D0627062A0020062706440648064A0628061B0020064A06450643064600200641062A062D00200648062B0627062606420020005000440046002006270644064506460634062306290020062806270633062A062E062F062706450020004100630072006F0062006100740020064800410064006F006200650020005200650061006400650072002006250635062F0627063100200035002E0030002006480627064406250635062F062706310627062A0020062706440623062D062F062B002E> /BGR 
<FEFF04180437043f043e043b043704320430043904420435002004420435043704380020043d0430044104420440043e0439043a0438002c00200437043000200434043000200441044a0437043404300432043004420435002000410064006f00620065002000500044004600200434043e043a0443043c0435043d04420438002c0020043c0430043a04410438043c0430043b043d043e0020043f044004380433043e04340435043d04380020043704300020043f043e043a0430043704320430043d04350020043d043000200435043a04400430043d0430002c00200435043b0435043a04420440043e043d043d04300020043f043e044904300020043800200418043d044204350440043d04350442002e002000200421044a04370434043004340435043d043804420435002000500044004600200434043e043a0443043c0435043d044204380020043c043e0433043004420020043404300020044104350020043e0442043204300440044f0442002004410020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200441043b0435043404320430044904380020043204350440044104380438002e> /CHS <FEFF4f7f75288fd94e9b8bbe5b9a521b5efa7684002000410064006f006200650020005000440046002065876863900275284e8e5c4f5e55663e793a3001901a8fc775355b5090ae4ef653d190014ee553ca901a8fc756e072797f5153d15e03300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c676562535f00521b5efa768400200050004400460020658768633002> /CHT <FEFF4f7f752890194e9b8a2d7f6e5efa7acb7684002000410064006f006200650020005000440046002065874ef69069752865bc87a25e55986f793a3001901a904e96fb5b5090f54ef650b390014ee553ca57287db2969b7db28def4e0a767c5e03300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c4f86958b555f5df25efa7acb76840020005000440046002065874ef63002> /CZE 
<FEFF005400610074006f0020006e006100730074006100760065006e00ed00200070006f0075017e0069006a007400650020006b0020007600790074007600e101590065006e00ed00200064006f006b0075006d0065006e0074016f002000410064006f006200650020005000440046002c0020006b00740065007200e90020007300650020006e0065006a006c00e90070006500200068006f006400ed002000700072006f0020007a006f006200720061007a006f007600e1006e00ed0020006e00610020006f006200720061007a006f007600630065002c00200070006f007300ed006c00e1006e00ed00200065002d006d00610069006c0065006d00200061002000700072006f00200069006e007400650072006e00650074002e002000200056007900740076006f01590065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f007400650076015900ed007400200076002000700072006f006700720061006d0065006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076011b006a016100ed00630068002e> /DAN <FEFF004200720075006700200069006e0064007300740069006c006c0069006e006700650072006e0065002000740069006c0020006100740020006f007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400650072002c0020006400650072002000620065006400730074002000650067006e006500720020007300690067002000740069006c00200073006b00e60072006d007600690073006e0069006e0067002c00200065002d006d00610069006c0020006f006700200069006e007400650072006e00650074002e0020004400650020006f007000720065007400740065006400650020005000440046002d0064006f006b0075006d0065006e0074006500720020006b0061006e002000e50062006e00650073002000690020004100630072006f00620061007400200065006c006c006500720020004100630072006f006200610074002000520065006100640065007200200035002e00300020006f00670020006e0079006500720065002e> /DEU 
<FEFF00560065007200770065006e00640065006e0020005300690065002000640069006500730065002000450069006e007300740065006c006c0075006e00670065006e0020007a0075006d002000450072007300740065006c006c0065006e00200076006f006e002000410064006f006200650020005000440046002d0044006f006b0075006d0065006e00740065006e002c00200064006900650020006600fc00720020006400690065002000420069006c006400730063006800690072006d0061006e007a0065006900670065002c00200045002d004d00610069006c0020006f006400650072002000640061007300200049006e007400650072006e00650074002000760065007200770065006e006400650074002000770065007200640065006e00200073006f006c006c0065006e002e002000450072007300740065006c006c007400650020005000440046002d0044006f006b0075006d0065006e007400650020006b00f6006e006e0065006e0020006d006900740020004100630072006f00620061007400200075006e0064002000410064006f00620065002000520065006100640065007200200035002e00300020006f0064006500720020006800f600680065007200200067006500f600660066006e00650074002000770065007200640065006e002e> /ESP <FEFF005500740069006c0069006300650020006500730074006100200063006f006e0066006900670075007200610063006900f3006e0020007000610072006100200063007200650061007200200064006f00630075006d0065006e0074006f00730020005000440046002000640065002000410064006f0062006500200061006400650063007500610064006f007300200070006100720061002000760069007300750061006c0069007a00610063006900f3006e00200065006e002000700061006e00740061006c006c0061002c00200063006f007200720065006f00200065006c006500630074007200f3006e00690063006f0020006500200049006e007400650072006e00650074002e002000530065002000700075006500640065006e00200061006200720069007200200064006f00630075006d0065006e0074006f00730020005000440046002000630072006500610064006f007300200063006f006e0020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e003000200079002000760065007200730069006f006e0065007300200070006f00730074006500720069006f007200650073002e> /ETI 
<FEFF004b00610073007500740061006700650020006e0065006900640020007300e400740074006500690064002000730065006c006c0069007300740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740069006400650020006c006f006f006d006900730065006b0073002c0020006d0069007300200073006f006200690076006100640020006b00f500690067006500200070006100720065006d0069006e006900200065006b007200610061006e0069006c0020006b007500760061006d006900730065006b0073002c00200065002d0070006f0073007400690067006100200073006100610074006d006900730065006b00730020006a006100200049006e007400650072006e00650074006900730020006100760061006c00640061006d006900730065006b0073002e00200020004c006f006f0064007500640020005000440046002d0064006f006b0075006d0065006e00740065002000730061006100740065002000610076006100640061002000700072006f006700720061006d006d006900640065006700610020004100630072006f0062006100740020006e0069006e0067002000410064006f00620065002000520065006100640065007200200035002e00300020006a00610020007500750065006d006100740065002000760065007200730069006f006f006e00690064006500670061002e> /FRA 
<FEFF005500740069006c006900730065007a00200063006500730020006f007000740069006f006e00730020006100660069006e00200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e00740073002000410064006f006200650020005000440046002000640065007300740069006e00e90073002000e000200049006e007400650072006e00650074002c002000e0002000ea007400720065002000610066006600690063006800e90073002000e00020006c002700e9006300720061006e002000650074002000e0002000ea00740072006500200065006e0076006f007900e9007300200070006100720020006d006500730073006100670065007200690065002e0020004c0065007300200064006f00630075006d0065006e00740073002000500044004600200063007200e900e90073002000700065007500760065006e0074002000ea0074007200650020006f007500760065007200740073002000640061006e00730020004100630072006f006200610074002c002000610069006e00730069002000710075002700410064006f00620065002000520065006100640065007200200035002e0030002000650074002000760065007200730069006f006e007300200075006c007400e90072006900650075007200650073002e> /GRE 
<FEFF03a703c103b703c303b903bc03bf03c003bf03b903ae03c303c403b5002003b103c503c403ad03c2002003c403b903c2002003c103c503b803bc03af03c303b503b903c2002003b303b903b1002003bd03b1002003b403b703bc03b903bf03c503c103b303ae03c303b503c403b5002003ad03b303b303c103b103c603b1002000410064006f006200650020005000440046002003c003bf03c5002003b503af03bd03b103b9002003ba03b103c42019002003b503be03bf03c703ae03bd002003ba03b103c403ac03bb03bb03b703bb03b1002003b303b903b1002003c003b103c103bf03c503c303af03b103c303b7002003c303c403b703bd002003bf03b803cc03bd03b7002c002003b303b903b100200065002d006d00610069006c002c002003ba03b103b9002003b303b903b1002003c403bf0020039403b903b1002d03b403af03ba03c403c503bf002e0020002003a403b10020005000440046002003ad03b303b303c103b103c603b1002003c003bf03c5002003ad03c703b503c403b5002003b403b703bc03b903bf03c503c103b303ae03c303b503b9002003bc03c003bf03c103bf03cd03bd002003bd03b1002003b103bd03bf03b903c703c403bf03cd03bd002003bc03b5002003c403bf0020004100630072006f006200610074002c002003c403bf002000410064006f00620065002000520065006100640065007200200035002e0030002003ba03b103b9002003bc03b503c403b103b303b503bd03ad03c303c403b503c103b503c2002003b503ba03b403cc03c303b503b903c2002e> /HEB 
<FEFF05D405E905EA05DE05E905D5002005D105D405D205D305E805D505EA002005D005DC05D4002005DB05D305D9002005DC05D905E605D505E8002005DE05E105DE05DB05D9002000410064006F006200650020005000440046002005D405DE05D505EA05D005DE05D905DD002005DC05EA05E605D505D205EA002005DE05E105DA002C002005D305D505D005E8002005D005DC05E705D805E805D505E005D9002005D505D405D005D905E005D805E805E005D8002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E002D0033002C002005E205D905D905E005D5002005D105DE05D305E805D905DA002005DC05DE05E905EA05DE05E9002005E905DC0020004100630072006F006200610074002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E> /HRV 
<FEFF005a00610020007300740076006100720061006e006a0065002000500044004600200064006f006b0075006d0065006e0061007400610020006e0061006a0070006f0067006f0064006e0069006a006900680020007a00610020007000720069006b0061007a0020006e00610020007a00610073006c006f006e0075002c00200065002d0070006f0161007400690020006900200049006e007400650072006e0065007400750020006b006f00720069007300740069007400650020006f0076006500200070006f0073007400610076006b0065002e00200020005300740076006f00720065006e0069002000500044004600200064006f006b0075006d0065006e007400690020006d006f006700750020007300650020006f00740076006f00720069007400690020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006b00610073006e0069006a0069006d0020007600650072007a0069006a0061006d0061002e> /HUN <FEFF00410020006b00e9007000650072006e00790151006e0020006d00650067006a0065006c0065006e00ed007400e9007300680065007a002c00200065002d006d00610069006c002000fc007a0065006e006500740065006b00620065006e002000e90073002000200049006e007400650072006e006500740065006e0020006800610073007a006e00e1006c00610074006e0061006b0020006c006500670069006e006b00e1006200620020006d0065006700660065006c0065006c0151002000410064006f00620065002000500044004600200064006f006b0075006d0065006e00740075006d006f006b0061007400200065007a0065006b006b0065006c0020006100200062006500e1006c006c00ed007400e10073006f006b006b0061006c0020006b00e90073007a00ed0074006800650074002e0020002000410020006c00e90074007200650068006f007a006f00740074002000500044004600200064006f006b0075006d0065006e00740075006d006f006b00200061007a0020004100630072006f006200610074002000e9007300200061007a002000410064006f00620065002000520065006100640065007200200035002e0030002c0020007600610067007900200061007a002000610074007400f3006c0020006b00e9007301510062006200690020007600650072007a006900f3006b006b0061006c0020006e00790069007400680061007400f3006b0020006d00650067002e> /ITA 
<FEFF005500740069006c0069007a007a006100720065002000710075006500730074006500200069006d0070006f007300740061007a0069006f006e00690020007000650072002000630072006500610072006500200064006f00630075006d0065006e00740069002000410064006f00620065002000500044004600200070006900f9002000610064006100740074006900200070006500720020006c0061002000760069007300750061006c0069007a007a0061007a0069006f006e0065002000730075002000730063006800650072006d006f002c0020006c006100200070006f00730074006100200065006c0065007400740072006f006e0069006300610020006500200049006e007400650072006e00650074002e0020004900200064006f00630075006d0065006e007400690020005000440046002000630072006500610074006900200070006f00730073006f006e006f0020006500730073006500720065002000610070006500720074006900200063006f006e0020004100630072006f00620061007400200065002000410064006f00620065002000520065006100640065007200200035002e003000200065002000760065007200730069006f006e006900200073007500630063006500730073006900760065002e> /JPN <FEFF753b97624e0a3067306e8868793a3001307e305f306f96fb5b5030e130fc30eb308430a430f330bf30fc30cd30c330c87d4c7531306790014fe13059308b305f3081306e002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b9069305730663044307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c306a308f305a300130d530a130a430eb30b530a430ba306f67005c0f9650306b306a308a307e30593002> /KOR 
<FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020d654ba740020d45cc2dc002c0020c804c7900020ba54c77c002c0020c778d130b137c5d00020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /LTH <FEFF004e006100750064006f006b0069007400650020016100690075006f007300200070006100720061006d006500740072007500730020006e006f0072011700640061006d00690020006b0075007200740069002000410064006f00620065002000500044004600200064006f006b0075006d0065006e007400750073002c0020006b00750072006900650020006c0061006200690061007500730069006100690020007000720069007400610069006b00790074006900200072006f006400790074006900200065006b00720061006e0065002c00200065006c002e002000700061016100740075006900200061007200200069006e007400650072006e0065007400750069002e0020002000530075006b0075007200740069002000500044004600200064006f006b0075006d0065006e007400610069002000670061006c006900200062016b007400690020006100740069006400610072006f006d00690020004100630072006f006200610074002000690072002000410064006f00620065002000520065006100640065007200200035002e0030002000610072002000760117006c00650073006e0117006d00690073002000760065007200730069006a006f006d00690073002e> /LVI 
<FEFF0049007a006d0061006e0074006f006a00690065007400200161006f00730020006900650073007400610074012b006a0075006d00750073002c0020006c0061006900200076006500690064006f00740075002000410064006f00620065002000500044004600200064006f006b0075006d0065006e007400750073002c0020006b006100730020006900720020012b00700061016100690020007000690065006d01130072006f007400690020007201010064012b01610061006e0061006900200065006b00720101006e0101002c00200065002d00700061007300740061006d00200075006e00200069006e007400650072006e006500740061006d002e00200049007a0076006500690064006f006a006900650074002000500044004600200064006f006b0075006d0065006e007400750073002c0020006b006f002000760061007200200061007400760113007200740020006100720020004100630072006f00620061007400200075006e002000410064006f00620065002000520065006100640065007200200035002e0030002c0020006b0101002000610072012b00200074006f0020006a00610075006e0101006b0101006d002000760065007200730069006a0101006d002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken die zijn geoptimaliseerd voor weergave op een beeldscherm, e-mail en internet. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) 
/NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d00200065007200200062006500730074002000650067006e0065007400200066006f007200200073006b006a00650072006d007600690073006e0069006e0067002c00200065002d0070006f007300740020006f006700200049006e007400650072006e006500740074002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002000730065006e006500720065002e> /POL <FEFF0055007300740061007700690065006e0069006100200064006f002000740077006f0072007a0065006e0069006100200064006f006b0075006d0065006e007400f300770020005000440046002000700072007a0065007a006e00610063007a006f006e00790063006800200064006f002000770079015b0077006900650074006c0061006e006900610020006e006100200065006b00720061006e00690065002c0020007700790073007901420061006e0069006100200070006f0063007a0074010500200065006c0065006b00740072006f006e00690063007a006e01050020006f00720061007a00200064006c006100200069006e007400650072006e006500740075002e002000200044006f006b0075006d0065006e0074007900200050004400460020006d006f017c006e00610020006f007400770069006500720061010700200077002000700072006f006700720061006d006900650020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006e006f00770073007a0079006d002e> /PTB 
<FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f0062006500200050004400460020006d00610069007300200061006400650071007500610064006f00730020007000610072006100200065007800690062006900e700e3006f0020006e0061002000740065006c0061002c0020007000610072006100200065002d006d00610069006c007300200065002000700061007200610020006100200049006e007400650072006e00650074002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /RUM <FEFF005500740069006c0069007a00610163006900200061006300650073007400650020007300650074010300720069002000700065006e007400720075002000610020006300720065006100200064006f00630075006d0065006e00740065002000410064006f006200650020005000440046002000610064006500630076006100740065002000700065006e0074007200750020006100660069015f006100720065006100200070006500200065006300720061006e002c0020007400720069006d0069007400650072006500610020007000720069006e00200065002d006d00610069006c0020015f0069002000700065006e00740072007500200049006e007400650072006e00650074002e002000200044006f00630075006d0065006e00740065006c00650020005000440046002000630072006500610074006500200070006f00740020006600690020006400650073006300680069007300650020006300750020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e00300020015f00690020007600650072007300690075006e0069006c006500200075006c0074006500720069006f006100720065002e> /RUS 
<FEFF04180441043f043e043b044c04370443043904420435002004340430043d043d044b04350020043d0430044104420440043e0439043a043800200434043b044f00200441043e043704340430043d0438044f00200434043e043a0443043c0435043d0442043e0432002000410064006f006200650020005000440046002c0020043c0430043a04410438043c0430043b044c043d043e0020043f043e04340445043e0434044f04490438044500200434043b044f0020044d043a04400430043d043d043e0433043e0020043f0440043e0441043c043e044204400430002c0020043f0435044004350441044b043b043a04380020043f043e0020044d043b0435043a04420440043e043d043d043e04390020043f043e044704420435002004380020044004300437043c043504490435043d0438044f0020043200200418043d044204350440043d043504420435002e002000200421043e043704340430043d043d044b04350020005000440046002d0434043e043a0443043c0435043d0442044b0020043c043e0436043d043e0020043e0442043a0440044b043204300442044c002004410020043f043e043c043e0449044c044e0020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200431043e043b043504350020043f043e04370434043d043804450020043204350440044104380439002e> /SKY 
<FEFF0054006900650074006f0020006e006100730074006100760065006e0069006100200070006f0075017e0069007400650020006e00610020007600790074007600e100720061006e0069006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020006b0074006f007200e90020007300610020006e0061006a006c0065007001610069006500200068006f0064006900610020006e00610020007a006f006200720061007a006f00760061006e006900650020006e00610020006f006200720061007a006f0076006b0065002c00200070006f007300690065006c0061006e0069006500200065002d006d00610069006c006f006d002000610020006e006100200049006e007400650072006e00650074002e00200056007900740076006f00720065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f00740076006f00720069016500200076002000700072006f006700720061006d006f006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076016100ed00630068002e> /SLV <FEFF005400650020006e006100730074006100760069007400760065002000750070006f0072006100620069007400650020007a00610020007500730074007600610072006a0061006e006a006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020006b006900200073006f0020006e0061006a007000720069006d00650072006e0065006a016100690020007a00610020007000720069006b0061007a0020006e00610020007a00610073006c006f006e0075002c00200065002d0070006f01610074006f00200069006e00200069006e007400650072006e00650074002e00200020005500730074007600610072006a0065006e006500200064006f006b0075006d0065006e0074006500200050004400460020006a00650020006d006f0067006f010d00650020006f0064007000720065007400690020007a0020004100630072006f00620061007400200069006e002000410064006f00620065002000520065006100640065007200200035002e003000200069006e0020006e006f00760065006a01610069006d002e> /SUO 
<FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f00740020006c00e400680069006e006e00e40020006e00e40079007400f60073007400e40020006c0075006b0065006d0069007300650065006e002c0020007300e40068006b00f60070006f0073007400690069006e0020006a006100200049006e007400650072006e0065007400690069006e0020007400610072006b006f006900740065007400740075006a0061002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d002000e400720020006c00e4006d0070006c0069006700610020006600f6007200200061007400740020007600690073006100730020007000e500200073006b00e40072006d002c0020006900200065002d0070006f007300740020006f006300680020007000e500200049006e007400650072006e00650074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /TUR 
<FEFF0045006b00720061006e002000fc0073007400fc0020006700f6007200fc006e00fc006d00fc002c00200065002d0070006f00730074006100200076006500200069006e007400650072006e006500740020006900e70069006e00200065006e00200075007900670075006e002000410064006f006200650020005000440046002000620065006c00670065006c0065007200690020006f006c0075015f007400750072006d0061006b0020006900e70069006e00200062007500200061007900610072006c0061007201310020006b0075006c006c0061006e0131006e002e00200020004f006c0075015f0074007500720075006c0061006e0020005000440046002000620065006c00670065006c0065007200690020004100630072006f0062006100740020007600650020004100630072006f006200610074002000520065006100640065007200200035002e003000200076006500200073006f006e0072006100730131006e00640061006b00690020007300fc007200fc006d006c00650072006c00650020006100e70131006c006100620069006c00690072002e> /UKR <FEFF04120438043a043e0440043804410442043e043204430439044204350020044604560020043f043004400430043c043504420440043800200434043b044f0020044104420432043e04400435043d043d044f00200434043e043a0443043c0435043d044204560432002000410064006f006200650020005000440046002c0020044f043a0456043d04300439043a04400430044904350020043f045604340445043e0434044f0442044c00200434043b044f0020043f0435044004350433043b044f043404430020043700200435043a04400430043d044300200442043000200406043d044204350440043d043504420443002e00200020042104420432043e04400435043d045600200434043e043a0443043c0435043d0442043800200050004400460020043c043e0436043d04300020043204560434043a0440043804420438002004430020004100630072006f006200610074002004420430002000410064006f00620065002000520065006100640065007200200035002e0030002004300431043e0020043f04560437043d04560448043e04570020043204350440044104560457002e> /ENU (Use these settings to create Adobe PDF documents best suited for on-screen display, e-mail, and the Internet. Created PDF documents can be opened with Acrobat and Adobe Reader 5.0 and later.) 
/Namespace [ (Adobe) (Common) (1.0) /OtherNamespaces [ /AsReaderSpreads false /CropImagesToFrames true /ErrorControl /WarnAndContinue /FlattenerIgnoreSpreadOverrides false /IncludeGuidesGrids false /IncludeNonPrinting false /IncludeSlug false /Namespace [ (Adobe) (InDesign) (4.0) /OmitPlacedBitmaps false /OmitPlacedEPS false /OmitPlacedPDF false /SimulateOverprint /Legacy /AddBleedMarks false /AddColorBars false /AddCropMarks false /AddPageInfo false /AddRegMarks false /ConvertColors /ConvertToRGB /DestinationProfileName (sRGB IEC61966-2.1) /DestinationProfileSelector /UseName /Downsample16BitImages true /FlattenerPreset << /PresetSelector /MediumResolution /FormElements false /GenerateStructure false /IncludeBookmarks false /IncludeHyperlinks false /IncludeInteractive false /IncludeLayers false /IncludeProfiles true /MultimediaHandling /UseObjectSettings /Namespace [ (Adobe) (CreativeSuite) (2.0) /PDFXOutputIntentProfileSelector /NA /PreserveEditing false /UntaggedCMYKHandling /UseDocumentProfile /UntaggedRGBHandling /UseDocumentProfile /UseDocumentBleed false>> setdistillerparams /HWResolution [600 600] /PageSize [612.000 792.000]>> setpagedevice</s>
<s>A Framework for Building a Natural Language Interface for Bangla
Yeasin Ar Rahman, Mahtabul Alam Sohan, Khalid Ibn Zinnah, Mohammed Moshiul Hoque
Chittagong University of Engineering & Technology, Chittagong-4349, Bangladesh
nibir201188@gmail.com, mahtabul1993@gmail.com, khalidex@yahoo.com, moshiul_240@cuet.ac.bd
Abstract— Mobile computing devices connect people to the Internet, the largest source of information in the world. However, to properly utilize this knowledge on these devices, most people need a Natural Language Interface; Siri, Google Now and Cortana are examples of such interfaces. Because Bangla is a low-resource language, building such an interface is difficult and time-consuming. Yet, with the growing number of smartphone and smart-device users in Bangla-speaking regions, application developers need such interfaces to provide web services effectively. This paper addresses this issue and gives an empirical framework for building a feasible Natural Language Interface in Bangla and similar low-resource languages.
Keywords— Bangla Natural Language Interface; Human Computer Interaction; Bangla Speech to Text; Bangla Language Processing; Artificial Intelligence.
I. INTRODUCTION
Language is used for communication, whether verbal, written or visual (sign language, body movement, nods, gestures, etc.). Each form of communication is distinct enough that it is difficult for computers to extract its meaning. With the introduction of smart mobile devices such as smartphones, wearables (smart watches, smart bands, etc.) and the Internet of Things, traditional user interfaces such as the GUI (graphical user interface) and CLI (command line interface) are no longer a viable option for effective use of human time [1].
In many cases these devices don't have traditional input-output peripherals such as a keyboard, mouse or display. To give a pleasant user experience, a Natural Language Interface (NLI), or more generally a Voice User Interface (VUI), is a practical and often necessary approach on these devices. An NLI generally has an artificial intelligence unit so that it can distinguish between different user commands and confidently perform the intended task. The process utilizes Automated Speech Recognition and Natural Language Understanding. Generally this kind of artificial intelligence program or service is called an Intelligent Personal Assistant (or simply IPA), which can perform tasks or services for a user [2]. In an NLI, linguistic events such as verbs, phrases and clauses act as User Interface (UI) controls for searching, creating, selecting and modifying data in software applications. In interface design NLIs are sought after for their speed and ease of use, but most struggle to understand the wide variety of ambiguous inputs [5]. Processing a language consists of several steps, applied sequentially: morphological, syntactic and semantic analysis. For a Natural Language Interface, part-of-speech tagging, named entity recognition and intent analysis are the key fields for achieving the desired result. Using machine learning methods, state-of-the-art systems have achieved striking results [3]. However, one key reason for this success was the large amount of annotated data available. For</s>
<s>most of the languages in the world, such data is not available. Over 250 million people use Bangla as their medium of communication, and there are currently 60 million Internet users in Bangladesh [23], most of whom go online through mobile devices. Currently, web and mobile application developers do not have access to natural language processing technologies for Bangla such as automatic speech recognition, text to speech, natural language search and optical character recognition. Since most of the general population in Bangla-speaking regions is not proficient in English, it is not possible to provide satisfactory service through mobile devices without these natural language services. Moreover, most of the applications and services currently available in the region cannot be fine-tuned for regional preferences without them. Developing a Natural Language Interface is important for Bangla because it facilitates searching for information, enables automation in homes and industry, makes communication with robots possible, makes navigation easier, and allows people with disabilities to communicate easily with people and devices. Most importantly, it removes the language barrier to the Internet and thus allows the village population to take part in the digital world; through a robust NLI it is possible to provide digital services effectively to both rural and urban populations. There is very little data available for Bangla language research, so it is difficult to develop a Natural Language Interface. In this work we address this issue and provide a standardized guideline and an empirical framework for building a Bangla natural language interface.
Although there is a lack of available data, we believe that by effectively using the Bangla data available on the Internet it is possible to build a feasible system that provides natural language services to application developers, who can in turn provide intelligent services to the Bangla-speaking population.
978-1-5090-5627-9/17/$31.00 ©2017 IEEE. International Conference on Electrical, Computer and Communication Engineering (ECCE), February 16-18, 2017, Cox's Bazar, Bangladesh.
II. BACKGROUND
There has been significant research in artificial intelligence and its subfields (e.g. Human-Computer Interaction) aimed at building systems that can interact with humans at the level of understanding of another human. But research suggests that it is quite difficult to formulate a methodology for simulating human behavior in machines, as our knowledge of human behavior is still at a primitive level. Practical systems are therefore designed to perform specific tasks in specific fields. The Open Agent Architecture (OAA) [21] is one of the most prominent frameworks for building multimodal interfaces. It delegates most of the work to individual services, which allows a clean, domain-independent design, but it requires that such services already exist; in most low-resource languages they do not, and they must be integrated into the system. RADAR [22] is another notable system: a task-specific personal-assistant framework that lets a user maintain a calendar intuitively. However it</s>
<s>paved the way for modern personal-assistant architectures. In the domain of intelligent personal assistants, the most influential project in terms of technology was the CALO project [11], funded by DARPA (Defense Advanced Research Projects Agency). One of the products of this project was PAL (Personal Assistant that Learns), which provided a general guideline for building useful NLI systems around key aspects such as learning, data management, data acquisition, controllers and the user interface. Today's most prominent Intelligent Personal Assistants, for example Siri (Apple), Cortana (Microsoft), Watson (IBM), M (Facebook), S Voice (Samsung) and Google Now (Google), all use similar concepts in their NLI platforms. But all of these systems were made primarily for English-speaking users. To our knowledge there has not been any significant work to build an NLI in Bangla, although much of the science required to construct such an interface has already been explored. Significant works on Bengali speech recognition and on knowledge representation and reasoning are stated here. In almost all cases an NLI utilizes a speech recognition engine. The first large-vocabulary continuous speech recognition system was Sphinx-II [4], developed at Carnegie Mellon University. Most modern general-purpose speech recognition systems use the Hidden Markov Model. HMMs are popular because they can be trained automatically and are simple and computationally feasible to use. However, the state of the art in speech recognition is the deep neural network. Using very large amounts of data, researchers from companies such as Microsoft, Google, IBM, Baidu, Apple and Nuance have reached near-perfect accuracy [8]. DNN architectures generate compositional models, where extra layers enable composition of features from lower layers, giving a huge learning capacity and thus the potential to model complex patterns of speech data [9].</s>
<s>In the case of Bangla there exists no general-purpose speech recognition service, and no online or offline engine to convert Bangla speech to text. There have been several works on Bengali speech-to-text systems, but almost all of them emphasized developing new algorithms rather than implementing an application or service. Hasnat et al. [12] made a customized Hidden Markov Model (HMM) based scheme for pattern classification and integrated the stochastic model within the scheme for Bengali speech-to-text, using the HTK framework to build their test system. Firoze et al. [13] developed a fuzzy-logic based speech recognition system and proposed that fuzzy logic should be the basis for all linguistic-ambiguity-related problems in Bengali; they empirically showed that fuzzy logic gives improved responses for more ambiguous linguistic entities in Bengali speech, and theirs was the first attempt to use cepstral analysis with an artificial neural network (ANN) to recognize Bengali speech. The only large-vocabulary Bengali continuous speech recognition system, to our knowledge, is Shruti-II, developed at IIT Kharagpur for visually impaired users [14]. A speech-to-text (STT) engine produces textual output in formats specified by the developers, which can then be processed further by other applications. The Natural Language Understanding part of a</s>
<s>NLI makes use of Named Entity Recognition (NER), Parts of Speech Tagging (POS tagging), the Bangla WordNet and sentence parsing. For parts-of-speech tagging, a Maximum Entropy (ME) approach was proposed by Ekbal et al. [15]. In 2007 Hasan et al. performed a comparative study of HMM, unigram and Brill's methods and showed that Brill's method gives better performance [16]. Named Entity Recognition is very difficult for low-resource languages like Bangla; Ekbal and Bandyopadhyay proposed a method using support vector machines for Bangla NER in 2008 [17], and Cucerzan and Yarowsky showed a method for language-independent named entity recognition [18]. For an NLI, dependency parsing is quite useful: Das et al. evaluated two different Bengali dependency parsers [19], and De et al. used a demand-satisfaction approach for dependency parsing in Bangla [20]. Most previous work in Bangla has one key limitation: almost all of it was tested on small datasets, which is not sufficient for an NLI.
III. METHODS
An NLI is a complex system, so its internal organization is divided into several parts or modules. The proposed system has the following modules: Automatic Speech Recognition Unit (ASRU), Intelligent Response Unit (IRU), Knowledge Base, Question-Answering and Search Engine, Control Systems API, and a Text-to-Speech based response system. The system architecture is shown in Fig. 1. Each module is important for the system to work properly, but the internal working of each module is separate from the architecture, so each module can be changed and improved without interfering with the other components. The approaches used by the different modules to perform their tasks are given in the following sections.
A. Automatic Speech Recognition Unit
The primary user interface of our system is voice-based, so there is a great emphasis on voice input. The unit works as follows: the user gives a command or query to the device (computer/mobile).</s>
Then the system takes that voice data and decodes it using the CMU Sphinx toolkit. CMU Sphinx [10] is an HMM-based large-vocabulary continuous speech recognition toolkit that supports any language as long as an acoustic model and a language model are provided. CMU Sphinx is used here because the required acoustic and language models are easier to train for this toolkit, and the helper applications needed to train the models are readily available. After decoding the speech signal, CMU Sphinx produces a text output that is sent to the IRU. The IRU then uses various natural language processing techniques to evaluate the sentence and correctly identify the intent of the user.
Fig. 1 System Architecture
B. Intelligent Response Unit
The Intelligent Response Unit (IRU) is the most important component of the proposed system. Its responsibility is to transform human language into actionable data. It uses several natural language processes, implemented using the Apache OpenNLP [24] framework; each process is a separate component inside the system. The architecture of the IRU is shown in Fig. 2.
Fig. 2 Intelligent Response Unit Architecture</s>
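The flow through the IRU components described next can be pictured as a chain of small, swappable stages. The sketch below is purely illustrative: the function names, the English example utterance and the keyword/gazetteer tables are invented stand-ins, not the trained OpenNLP models the system actually uses.

```python
# Minimal sketch of the IRU stage chain: tokenize -> recognize entities
# -> match intent -> generate an actionable output record (cf. Table II).
# All names and lookup tables below are illustrative stand-ins.

def tokenize(text):
    # Lexical analysis: split an utterance into word tokens,
    # separating a trailing question mark as its own token.
    had_q = text.endswith("?")
    return text.rstrip("?").split() + (["?"] if had_q else [])

def recognize_entities(tokens, gazetteer):
    # Toy NER: look tokens up in a small gazetteer (the real system
    # uses a trained MaxEnt model instead).
    return {tok: gazetteer[tok] for tok in tokens if tok in gazetteer}

def match_intent(tokens, intent_keywords):
    # Keyword-based intent analysis: the first keyword hit wins.
    for tok in tokens:
        for intent, keywords in intent_keywords.items():
            if tok in keywords:
                return intent
    return "unknown"

def generate_output(intent, entities, utype):
    # Output generation: an actionable record like those in Table II.
    return {"intent": intent, "entities": entities, "type": utype}

# Hypothetical tables standing in for trained models.
INTENT_KEYWORDS = {"weather": {"weather", "temperature"},
                   "control": {"set", "turn"}}
GAZETTEER = {"biryani": "Food"}

tokens = tokenize("how will the weather be tonight?")
result = generate_output(match_intent(tokens, INTENT_KEYWORDS),
                         recognize_entities(tokens, GAZETTEER),
                         "query")
```

Because each stage maps onto one component in Fig. 2, any stage can be replaced (e.g. the toy gazetteer NER by a trained model) without touching the others.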
<s>A brief explanation of each component's working procedure is given below:
1) Lexical Analysis and Stemming: takes textual input from the previous unit and converts it into word tokens using a lexical analyzer, which also stems individual words to their root forms. Each token is then passed on to the next section. For example,
Original form: “িবরানী কাথায় সবেচেয় ভাল?”
Tokenized form: “িবরানী”, “ কাথায়”, “সবেচেয়”, “ভাল”, “?”
2) Parts of Speech Tagging: the parts-of-speech (POS) tagger tags each token with a part-of-speech tag. This is a very important step because all subsequent steps depend greatly on correct POS tagging. It uses a statistical MaxEnt-based learning model, trained with OpenNLP. Equation (1) is the general form of the MaxEnt model:
p(t | c) = exp(∑_i λ_i f_i(t, c)) / ∑_{t'} exp(∑_i λ_i f_i(t', c))   (1)
where t is a tag, c is its context, the f_i are feature functions and the λ_i their weights.
3) Named Entity Recognition: the named entity recognizer recognizes all the named entities in the text. The named entities are the objects on which the system will run a command or for which it will search for an answer. It also uses a MaxEnt model. For example,
Tokenized form: “িবরানী”, “ কাথায়”, “সবেচেয়”, “ভাল”, “?”
Named entities: <Entity Food>“িবরানী”</Entity Food>, “ কাথায়”, “সবেচেয়”, “ভাল”, “?”
4) Sentence Dependency Parsing: the dependency parser creates a tree structure of the text that defines the relationships between the different parts of the sentence. It is used for command or query output generation and requires a treebank. The dependency-parse tree for the sentence “কাজী নজ ল ইসলাম কাথায় জ হন কেরেছন?” is given in Fig. 3; the figure is the output of the annotation application WebAnno [26].
Fig. 3 Example of dependency parsing
5) Keyword-Based Intent Analysis: intent analysis is based on keywords. Each keyword can be rephrased by other words, provided the word is very close to it in the Bangla WordNet. A sample of the process is shown in Table I. TABLE I.</s>
INTENT ANALYSIS
Intent | Keyword | Alternate Keywords
Weather | আবহাওয়া | পিরেবশ, অব া
Control | ালাও | অন কেরা
An intent depends on the application services supported by the application developer, and intents can have multiple keywords for different processes. If two intent keywords collide, the named entities are used to resolve the collision.
6) Output Generation: after the intent has been recognized, the system creates an output string based on the parsed text and the recognized entities. A sample is given in Table II.
TABLE II. OUTPUT GENERATION
Utterance (Type: Command): “এিস ট ােরচার ২৫ এ দাও”
Output: {intent: AC control, command: Set Temp, value: 25 degree, place: this.room, type: command}
Utterance (Type: Query): “আজেক রােতর পিরেবশ কমন হেব?”
Output: {intent: weather, day: this.today.date, place: this.location, time: “রাত”, type: query}
C. Knowledge Base
The knowledge base is the physical database where all the information resides. The system uses a knowledge database based on Apache Solr [25], an open-source full-text search-engine database. The</s>
<s>data sources are the following: 1) Wikipedia, 2) Banglapedia, 3) newspaper, blog and Bangla website crawl data.
D. Question Answering and Search
For factoid question answering (QA) the system uses the general methodologies for question answering systems. It uses a TF/IDF [7] based algorithm for ranking the relevant documents. The weight of term i in document j can be found from the document-term matrix using (2); we use a cut-off of the top 20 results to keep the calculation time to a minimum.
w_{i,j} = tf_{i,j} × idf_i   (2)
After finding the candidate passages, the QA system's tasks are the following:
1) Question Classification: it analyzes the question using the Webclopedia QA Typology [6]. All factoid questions follow a pattern, so a question classifier can determine the type of the expected result.
2) Answer Type Pattern Extraction: after classifying the question, the system consolidates candidate sentences. It uses the following independent methodologies [7] to rank them: a) answer type match, b) pattern match, c) number of matched question keywords, d) keyword distance, e) novelty factor, f) apposition features, g) punctuation location, h) sequences of question terms. If the confidence score for an answer is more than 70%, that answer is shown; otherwise the top five answers and their associated document links are displayed. In the case of open-domain questions, the system hands them over to the search engine, the Apache Solr search system mentioned above.
E. Control Systems APIs
The Control System APIs are sets of methods exposed in order to control a device or application programmatically. Each system has a different set of control APIs, and any API can be logically combined into our proposed framework. The APIs take the command output from the IRU and then perform the task specified in the command. Application developers have to utilize the intents used in the previous steps in order to design the APIs.
F.</s>
Text to Speech Confirmation
After completing all the steps it is necessary to confirm to the user that the task has been completed. This can be accomplished by a Text to Speech (TTS) engine. When the results are the output of a query they can be shown directly on the display; if no display device is present they can be spoken by the TTS engine.
IV. EXPERIMENTS
This section describes the process of data collection, the data collection environment, the tools used for collecting data and the primary results.
A. Data Collection
For the purpose of robust speech-to-text conversion we tried to determine the possible utterances a user might say during an operation. As there has not been any previous attempt to make an NLI for Bangla, we constructed a command script for Bangla, adapting utterances similar to those used in other languages. The script has 250 utterances; the domains covered in the script are given in Table III. We also found that for most command-and-control utterances the vocabulary size is limited. However, precision for this kind of command is all the more important because the commands are</s>
<s>in many cases similar, and it is possible that one command might be confused with another. The solution to this problem is to collect a large amount of data to capture the users' voice properties accurately. The voices were taken from users in the age range of 20-30 years, mainly from people who volunteered for the project. The data is roughly divided into a 75% male and 25% female ratio. The users are mostly students, with some service holders and businessmen; their education levels vary from higher secondary to university. The voices were taken in the standard dialect of Bangla, and we tried to minimize regional bias by recording people from different districts.
TABLE III. COMMAND AND SERVICE SCRIPT
Domain Name | Example Utterance
Weather | আজেকর তাপমা া কত?
Food And Restaurant | িবরানী কাথায় সবেচেয় ভাল?
General Direction for Items (Stationery/Grocery) | আেশপােশ বইখাতা কাথায় পাওয়া যােব?
Directions for Landmarks | বনানীেত হাসপাতাল কাথায়?
Travel Related | কাছাকািছ দখার মত িক আেছ?
Price Related (General/Stock Market) | গতকাল [মুরগীর] দাম কত িছল?
General Query | িবরািনর রিসপী বর কর।
General Knowledge Question | কাজী নজ ল ইসলাম কাথায় জ হন কেরেছন?
Note and Alarm | সকাল ৭ টায় এলাম সট কর।
Communication | বাসায় ফান দাও
Conversion and Calculation | ৪৫ ডলার সমান কত টাকা?
Sports Query | বাংলােদেশর ার কত?
Transportation Timetable | আজেক পারাবত ল কয়টায় ছাড়েব?
Device Control | এিস ট ােরচার ২৫ এ দাও
Maps and Fare Calculation | ধানমি ২৭ থেক ধানমি ১০ এর ির া ভাড়া কত?
Numbers | এক হাজার ইশ পাচ
Program Control | আমার কল রকড দখাও
On the other hand, in the case of queries it is not practically possible to guess what a user might say. To address this issue we created another script consisting of various texts collected from newspapers, television, novels and other domains that contain common words in everyday use. A partial list of the data sources in our query script is given in Table IV.</s>
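The TF/IDF weighting of eq. (2), used in the question-answering module to rank candidate documents before answer extraction, can be sketched as follows. This is a generic illustration with invented toy documents and an invented function name; the actual system delegates retrieval to Apache Solr.

```python
import math

def tfidf_rank(query_terms, docs, top_k=20):
    """Rank documents by summed tf*idf weight of the query terms
    (eq. 2), keeping only the top_k candidates as in the paper."""
    n = len(docs)
    # document frequency of each query term
    df = {t: sum(1 for d in docs if t in d) for t in query_terms}
    scores = []
    for j, doc in enumerate(docs):
        w = 0.0
        for t in query_terms:
            if df[t]:
                tf = doc.count(t)            # term frequency in doc j
                idf = math.log(n / df[t])    # inverse document frequency
                w += tf * idf                # w_{t,j} = tf_{t,j} * idf_t
        scores.append((w, j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:top_k]]

# Toy corpus: documents as token lists (invented examples).
docs = [["dhaka", "weather", "today"],
        ["cricket", "score"],
        ["weather", "forecast", "weather"]]
ranked = tfidf_rank(["weather"], docs, top_k=2)
```

The top-20 cut-off mentioned in Section D corresponds to the `top_k` parameter; documents that contain the query term more often, and terms that appear in fewer documents, both push a document up the ranking.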
The voice data has been collected under the following conditions: 1) Microphone: a Blue Yeti Professional microphone is used to collect the data. 2) Software: Audacity is used to record the data. 3) Environment: the data is collected in a natural environment in order to take natural background noise into consideration.
TABLE IV. DOMAINS FOR GENERAL QUERY SCRIPT
Source Name | Percentage of total utterances (rounded)
Wikipedia | 20%
Newspaper Editorial (Prothom-Alo) | 15%
Blogs (Somewherein Blog, Bdnews24 Blog) | 20%
Websites (techtunes, techtweets) | 10%
News (Prothom-Alo, Bdnews24) | 10%
Novels (Dorojar Opashe, Parapar, Meku Kahini) | 10%
History (Ekattorer Dinguli) |
Famous Personalities (Humayun Ahmed) |
Famous Places (Cox's Bazar, Jaflong) |
For the knowledge database, all the articles from the Bangla Wikipedia have been collected and hand-cleaned. The collection contains 240K lines, which to our knowledge makes it one of the largest Bangla text corpora. Data from popular newspapers, blogs and websites is being collected. As most open-source NLP software does not directly support Bangla in Unicode form, we needed</s>
<s>a transliteration tool to easily port Bangla support into this software. Open-source transliteration tools for Bangla are not suitable for large amounts of text because they are very slow, so we developed a parallel transliteration application based on the Fork-Join framework, which sped up our process several times.
B. Results
The system development is at a preliminary stage and the data collection phase is ongoing. Several hours of voice data have been collected and are being tested to see whether the quality matches expectations. Here the results for the first 2 hours of the command-and-service script are given, tested with two different models: a semi-continuous model and a continuous model. Two parameters were used to evaluate the data: the word error rate (WER), the number of wrongly recognized words per 100 recognized words, which measures the accuracy of the system; and the response time (RT), which measures how much time the system takes to recognize a single sentence. It has been found that semi-continuous models are faster at detecting utterances while continuous models are more accurate. Accuracy and rounded response times are given in Table V; after data collection is complete this accuracy can change.
TABLE V. WORD ERROR RATE AND RESPONSE TIME
Model | WER | RT
Continuous (800 words) | 6.778 | 0.8s
Semi-Continuous (800 words) | 8.598 | 0.5s
V. CONCLUSION
The primary contribution of this work is the formulation of a feasible framework for low-resource languages like Bangla. Where previous research exists, we propose using it to implement a particular feature of the system; where no such work is available, we suggest adapting existing systems to Bangla. The secondary contribution of this work is the construction and design of large datasets for different natural language processes. Our planned Bangla speech corpus is, as far as we know, the largest speech corpus yet designed for Bangla.</s>
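The WER figures in Table V follow the standard definition of word error rate. As a reference sketch (not the evaluation code used for these experiments), WER can be computed from a word-level Levenshtein alignment between the reference transcript and the recognizer's hypothesis:

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions) per
    100 reference words, via a word-level Levenshtein alignment."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return 100.0 * dp[len(r)][len(h)] / len(r)

# One substituted word out of four reference words -> WER of 25.
score = wer("a b c d", "a x c d")
```

A WER of 6.778 (continuous model, Table V) thus means roughly 7 word errors per 100 recognized words.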
The language model is also large enough to use readily in any large-scale production environment. In continuation of this work, experimental results and accuracy data will be published, along with performance analysis and the hardware requirements for specific tasks. Further research is needed on a machine-learning based question answering engine; deep learning can also be used for the speech recognition engine.
REFERENCES
[1] P. Maes, "Agents that reduce work and information overload," Communications of the ACM, vol. 37, no. 7, pp. 30–40, Jul. 1994.
[2] N. R. Jennings and M. Wooldridge, "Applications of Intelligent Agents," Agent Technology, pp. 3–28, 1998.
[3] S. J. Russell and P. Norvig, "Artificial intelligence: A modern approach," 3rd ed., Prentice Hall, pp. 25-28, 2009.
[4] X. Huang, F. Alleva, H.-W. Hon, M.-Y. Hwang, K.-F. Lee, and R. Rosenfeld, "The SPHINX-II speech recognition system: an overview," Computer Speech & Language, vol. 7, no. 2, pp. 137–148, 1993.
[5] P. R. Cohen, "The role of natural language in a multimodal interface," Proceedings of the 5th annual ACM symposium</s>
<s>on User interface software and technology - UIST '92, 1992.
[6] D. Ravichandran and E. Hovy, "Learning surface text patterns for a question answering system," in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 41-47, 2002.
[7] D. Jurafsky and J. H. Martin, "Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition," 2nd ed., Pearson Prentice Hall, Ch. 23, pp. 5-23, 2008.
[8] G. Hinton et al., "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, Nov. 2012.
[9] L. Deng and D. Yu, Deep Learning: Methods and Applications, now publishers, pp. 198–213, 2014.
[10] P. Lamere et al., "Design of the CMU Sphinx-4 decoder," in INTERSPEECH, 2003.
[11] P. M. Berry, K. Myers, T. E. Uribe, and N. Y. Smith, "Constraint solving experience with the CALO project," in Changes '05 International Workshop on Constraint Solving under Change and Uncertainty, 2005.
[12] M. A. Hasnat, J. Mowla, and M. Khan, "Isolated and continuous Bangla speech recognition: implementation, performance and application perspective," Center for Research on Bangla Language Processing (CRBLP), 2007.
[13] A. Firoze, M. S. Arifin, and R. M. Rahman, "Bangla user adaptive word speech recognition," International Journal of Fuzzy System Applications, vol. 3, no. 3, pp. 1–36, 2013.
[14] S. Mandal, B. Das, and P. Mitra, "Shruti-II: A vernacular speech recognition system in Bengali and an application for visually impaired community," Students' Technology Symposium, IEEE, 2010.
[15] A. Ekbal, R. Haque, and S. Bandyopadhyay, "Maximum Entropy Based Bengali Part of Speech Tagging," A.</s>
Gelbukh (Ed.), Advances in Natural Language Processing and Applications, Research in Computing Science (RCS) Journal 33: 67-78, 2008.
[16] F. M. Hasan, N. UzZaman, and M. Khan, "Comparison of different POS tagging techniques (n-gram, HMM and Brill's tagger) for Bangla," Advances and Innovations in Systems, Computing Sciences and Software Engineering, Springer Netherlands, pp. 121-126, 2007.
[17] A. Ekbal and S. Bandyopadhyay, "Bengali Named Entity Recognition Using Support Vector Machine," IJCNLP, 2008.
[18] S. Cucerzan and D. Yarowsky, "Language independent named entity recognition combining morphological and contextual evidence," Proceedings of the 1999 Joint SIGDAT Conference on EMNLP and VLC, 1999.
[19] A. Das, A. Shee, and U. Garain, "Evaluation of two Bengali dependency parsers," 24th International Conference on Computational Linguistics, 2012.
[20] S. De, A. Dhar, and U. Garain, "Structure Simplification and Demand Satisfaction Approach to Dependency Parsing for Bangla," Proc. of 6th Int. Conf. on Natural Language Processing (ICON) tool contest: Indian Language Dependency Parsing, 2009.
[21] A. Cheyer and D. Martin, "The open agent architecture," Autonomous Agents and Multi-Agent Systems, vol. 4, no. 1, pp. 143-148, 2001.
[22] P. J. Modi, M. Veloso, S. F. Smith, and J. Oh, "CMRadar: A personal assistant agent for calendar management," in Agent-Oriented Information Systems II, pp. 169-181, Springer Berlin Heidelberg, 2005.
[23] Bangladesh Telecommunication Regulatory Commission, "Internet subscribers in Bangladesh</s>
<s>July, 2016," 2016. [Online]. Available: http://www.btrc.gov.bd/content/internet-subscribers-bangladesh-july-2016. Accessed: Sep. 1, 2016.
[24] The Apache Software Foundation, "Apache OpenNLP," 2010. [Online]. Available: https://opennlp.apache.org/. Accessed: Sep. 6, 2016.
[25] The Apache Software Foundation, "Apache Solr," 2016. [Online]. Available: http://lucene.apache.org/solr/. Accessed: Sep. 6, 2016.
[26] R. E. d. Castilho, C. Biemann, I. Gurevych and S. M. Yimam, "WebAnno: a flexible, web-based annotation tool for CLARIN," Proceedings of the CLARIN Annual Conference (CAC), 2014.
<FEFF00560065007200770065006e00640065006e0020005300690065002000640069006500730065002000450069006e007300740065006c006c0075006e00670065006e0020007a0075006d002000450072007300740065006c006c0065006e00200076006f006e002000410064006f006200650020005000440046002d0044006f006b0075006d0065006e00740065006e002c00200075006d002000650069006e00650020007a0075007600650072006c00e40073007300690067006500200041006e007a006500690067006500200075006e00640020004100750073006700610062006500200076006f006e00200047006500730063006800e40066007400730064006f006b0075006d0065006e00740065006e0020007a0075002000650072007a00690065006c0065006e002e00200044006900650020005000440046002d0044006f006b0075006d0065006e007400650020006b00f6006e006e0065006e0020006d006900740020004100630072006f00620061007400200075006e0064002000520065006100640065007200200035002e003000200075006e00640020006800f600680065007200200067006500f600660066006e00650074002000770065007200640065006e002e> /ESP <FEFF005500740069006c0069006300650020006500730074006100200063006f006e0066006900670075007200610063006900f3006e0020007000610072006100200063007200650061007200200064006f00630075006d0065006e0074006f0073002000640065002000410064006f00620065002000500044004600200061006400650063007500610064006f007300200070006100720061002000760069007300750061006c0069007a00610063006900f3006e0020006500200069006d0070007200650073006900f3006e00200064006500200063006f006e006600690061006e007a006100200064006500200064006f00630075006d0065006e0074006f007300200063006f006d00650072006300690061006c00650073002e002000530065002000700075006500640065006e00200061006200720069007200200064006f00630075006d0065006e0074006f00730020005000440046002000630072006500610064006f007300200063006f006e0020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e003000200079002000760065007200730069006f006e0065007300200070006f00730074006500720069006f007200650073002e> /FRA 
<FEFF005500740069006c006900730065007a00200063006500730020006f007000740069006f006e00730020006100660069006e00200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e00740073002000410064006f006200650020005000440046002000700072006f00660065007300730069006f006e006e0065006c007300200066006900610062006c0065007300200070006f007500720020006c0061002000760069007300750061006c00690073006100740069006f006e0020006500740020006c00270069006d007000720065007300730069006f006e002e0020004c0065007300200064006f00630075006d0065006e00740073002000500044004600200063007200e900e90073002000700065007500760065006e0074002000ea0074007200650020006f007500760065007200740073002000640061006e00730020004100630072006f006200610074002c002000610069006e00730069002000710075002700410064006f00620065002000520065006100640065007200200035002e0030002000650074002000760065007200730069006f006e007300200075006c007400e90072006900650075007200650073002e> /ITA (Utilizzare queste</s>
<s>impostazioni per creare documenti Adobe PDF adatti per visualizzare e stampare documenti aziendali in modo affidabile. I documenti PDF creati possono essere aperti con Acrobat e Adobe Reader 5.0 e versioni successive.) /JPN <FEFF30d330b830cd30b9658766f8306e8868793a304a3088307353705237306b90693057305f002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c3044307e30593002> /KOR <FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020be44c988b2c8c2a40020bb38c11cb97c0020c548c815c801c73cb85c0020bcf4ace00020c778c1c4d558b2940020b3700020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken waarmee zakelijke documenten betrouwbaar kunnen worden weergegeven en afgedrukt. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) 
/NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d002000650072002000650067006e0065007400200066006f00720020007000e5006c006900740065006c006900670020007600690073006e0069006e00670020006f00670020007500740073006b007200690066007400200061007600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002e> /PTB <FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f00620065002000500044004600200061006400650071007500610064006f00730020007000610072006100200061002000760069007300750061006c0069007a006100e700e3006f002000650020006100200069006d0070007200650073007300e3006f00200063006f006e0066006900e1007600650069007300200064006500200064006f00630075006d0065006e0074006f007300200063006f006d0065007200630069006100690073002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /SUO 
<FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f0074002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002c0020006a006f0074006b006100200073006f0070006900760061007400200079007200690074007900730061007300690061006b00690072006a006f006a0065006e0020006c0075006f00740065007400740061007600610061006e0020006e00e400790074007400e4006d0069007300650065006e0020006a0061002000740075006c006f007300740061006d0069007300650065006e002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d00200070006100730073006100720020006600f60072002000740069006c006c006600f60072006c00690074006c006900670020007600690073006e0069006e00670020006f006300680020007500740073006b007200690066007400650072002000610076002000610066006600e4007200730064006f006b0075006d0065006e0074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /ENU (Use these settings to create PDFs that match the "Required" settings for PDF Specification 4.01)>> setdistillerparams /HWResolution [600 600] /PageSize [612.000 792.000]>> setpagedevice</s>
ARPN Journal of Engineering and Applied Sciences, Vol. 10, No. 15, August 2015. ISSN 1819-6608. ©2006-2015 Asian Research Publishing Network (ARPN). All rights reserved. www.arpnjournals.com

DESIGN AND IMPLEMENTATION OF AN EFFICIENT ENCONVERTER FOR BANGLA LANGUAGE

M. F. Mridha (1), Aloke Kumar Saha (1), Md. Akhtaruzzaman Adnan (1), Molla Rashied Hussein (1) and Jugal Krishna Das (2)
(1) Department of Computer Science and Engineering, University of Asia Pacific, Dhaka, Bangladesh
(2) Department of Computer Science and Engineering, Jahangirnagar University, Savar, Dhaka, Bangladesh
E-Mail: mdfirozm@yahoo.com

ABSTRACT

In this paper, a distinctive approach to Machine Translation (MT) from the Bangla language to the Universal Networking Language (UNL) is proposed. This approach supports more precise analysis of Bangla sentences. The analysis produces a semantic-net-like structure expressed in UNL. The UNL system comprises two major components: the EnConverter (used for converting text from a native language to UNL) and the DeConverter (used for converting text from UNL to a native language). This paper discusses the framework for designing an EnConverter for Bangla, with particular attention to generating UNL attributes and relations from Bangla sentence input. The structural constitution of the Bangla EnConverter, the algorithm for analyzing the Bangla input sentence, and the resolution of UNL relations and attributes are also presented. The paper highlights the enconversion analysis rules for the EnConverter and indicates their use in generating UNL expressions. It also covers the results of implementing the Bangla EnConverter and compares them with the system available on a language server located in Russia.

Keywords: en-converter, machine translation, knowledge base, natural language parsing, universal networking language.
INTRODUCTION According to a story narrated in the "Book of Genesis of the Tanakh" (Hebrew Bible), everyone on Earth used to speak the same language. People there learned to make bricks and build a city with a skyscraping tower. Purpose of that skyscraper is to stay in a single building and not to be scattered over the world. Eventually, they developed diverse languages over several eras and got themselves scattered over the world, as they failed to comprehend each other’s language as well as motives. Natural Language Processing (NLP) has a potential to unite the Universe again, as per the aforementioned story, but not by the same language, rather by constructing common platform for all existing languages. UNL has been used by researchers as an</s>
Interlingua approach for NLP. The World Wide Web (WWW) today has to face the complexity of dealing with multilingualism. People speak different languages, and the number of natural languages, along with their dialects, is estimated to be close to 4000. The Universal Networking Language [1,2,3] has been introduced as a digital meta-language for describing, summarizing, refining, storing, and disseminating information in a machine-independent and human-language-neutral form. Many societies around the world are lagging behind in this age of Information Technology simply because of the language barrier. There is a great need to translate digital content, including but not limited to websites, blogs, online news portals, e-books, e-journals, and e-mails, into native languages to overcome that barrier. This paper focuses on one such technology and describes the work carried out in this direction for the Bangla language. The UNL system has two important components: the EnConverter and the DeConverter. The EnConverter converts source-language sentences into UNL expressions [4], and the DeConverter converts UNL expressions into target-language sentences. With the development of an EnConverter for Bangla, Bangla text can be converted to UNL expressions; it then becomes possible to translate Bangla text into any language that has its own DeConverter capable of converting the UNL expressions generated by the Bangla EnConverter into that language. This will certainly help in developing a multilingual machine translation system for Bangla. The organization of this paper is as follows: Section 2 describes the related works, Section 3 gives a short description of the UNL format for representing information, and Section 4 describes the design of the Bangla EnConverter. Finally, Section 5 draws conclusions with some remarks on future work.
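The EnConverter/DeConverter composition just described can be sketched as a two-stage pipeline: any target language with its own DeConverter becomes reachable from a single Bangla EnConverter. The following is a minimal illustrative sketch in Python; the function names and the toy UNL-like string format are assumptions for exposition, not the actual UNL system's API.

```python
# Toy sketch of the interlingua pipeline: Bangla -> UNL -> target language.
# The "{unl}...{/unl}" wrapper here only marks the data flow; a real
# EnConverter performs parsing, UW lookup, and relation resolution.

def enconvert_bangla(sentence: str) -> str:
    """Hypothetical EnConverter: wrap a (romanized) Bangla sentence as UNL."""
    return f"{{unl}}{sentence}{{/unl}}"

def deconvert(unl_expr: str, target: str) -> str:
    """Hypothetical DeConverter: render a UNL-like expression in a target language."""
    body = unl_expr.removeprefix("{unl}").removesuffix("{/unl}")
    return f"[{target}] {body}"

def translate(sentence: str, target: str) -> str:
    # Interlingua design: one EnConverter plus one DeConverter per target,
    # instead of a dedicated translator for every language pair.
    return deconvert(enconvert_bangla(sentence), target)

print(translate("se school e jai", "en"))  # → [en] se school e jai
```

The design pay-off is the module count: n languages need only n EnConverter/DeConverter pairs, rather than on the order of n(n-1) direct transfer modules.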
RELATED WORKS

In order to design a multilingual machine translation system for Bangla, the interlingua approach is the best match, as it requires only n interlingua transfer modules for n languages [5]. A transfer module for each language requires only two components: one for converting from the source language to the interlingua, and another for converting the interlingua to the target language. We have used the Universal Networking Language (UNL) as the interlingua for this task, as the UNL representation has the right level of expressive power and granularity. UNL has 46 semantic relations and 86 attributes to express the semantic content of a sentence [5,6]. UNL has been developed and is managed by the Universal Networking Digital Language (UNDL) Foundation, an independent NGO founded in 2001 and based in Geneva, Switzerland, as the extension of an initial project launched by the Institute of Advanced Studies of the United Nations University, Tokyo, Japan in 1996 [7,8]. For converting Bangla sentences to UNL expressions, we first went through the Universal Networking Language (UNL) [10,11,12,13], where we learned about UNL expressions, relations, attributes, Universal Words, the UNL Knowledge Base,
Knowledge Representation in UNL, Logical Expressions in UNL, UNL systems, and the specifications of the EnConverter. All of these are key factors in preparing the Bangla word dictionary and the enconversion and deconversion rules required to convert a Bangla sentence into UNL expressions. Secondly, we rigorously studied Bangla grammar [13,9], morphological analysis [14,15,16,17], and the construction of Bangla sentences [9] based on semantic structure. Using the above references, we extracted ideas about Bangla grammar for morphological and semantic analysis in order to prepare the Bangla word dictionary [18,19], morphological rules, and enconversion rules in the UNL format provided by the UNL Center of the UNDL Foundation.

UNL format for representation of information

A UNL representation consists of UNL relations, UNL attributes, and Universal Words (UWs). UWs are represented by their English equivalents and are listed in the Universal Word Lexicon of the UNL Knowledge Base [1]. Relations are the building blocks of UNL sentences; the relations between words are drawn from a set of predefined relations [3]. Attribute labels are attached to Universal Words to provide additional information such as tense and number. For example, the sentence "সে স্কুলে যায়" (se school e jai, "he goes to school") can be represented by the following UNL expression:

{unl}
agt(go(icl>move>do,plt>place,agt>thing):0B.@entry.@present,he(icl>person):00)
plt(go(icl>move>do,plt>place,agt>thing):0B.@entry.@present,school(icl>building>thing,equ>educational_institute):03)
{/unl}

Here agt is the UNL relation indicating "a thing which initiates an action"; plt is the UNL relation indicating the place an event is directed toward; and @entry and @present are UNL attributes marking the main verb and the present tense.
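The expression above can be assembled mechanically from its three ingredients: relations, Universal Words with their constraint lists, and attribute labels. The sketch below shows one way to do that in Python; the `UW` class and `relation` helper are hypothetical conveniences for illustration, not part of any official UNL toolkit.

```python
# Assemble a UNL expression from Universal Words, attributes, and relations.

class UW:
    """A Universal Word with its constraint list, node id, and attribute labels."""
    def __init__(self, headword, constraints, node_id, attrs=()):
        self.headword, self.constraints = headword, constraints
        self.node_id, self.attrs = node_id, attrs

    def __str__(self):
        # e.g. go(icl>move>do,plt>place,agt>thing):0B.@entry.@present
        s = f"{self.headword}({self.constraints}):{self.node_id}"
        return s + "".join(f".@{a}" for a in self.attrs)

def relation(name, uw1, uw2):
    """Render one binary UNL relation, e.g. agt(<verb>,<agent>)."""
    return f"{name}({uw1},{uw2})"

# The three nodes of "se school e jai" from the example in the text.
go = UW("go", "icl>move>do,plt>place,agt>thing", "0B", ("entry", "present"))
he = UW("he", "icl>person", "00")
school = UW("school", "icl>building>thing,equ>educational_institute", "03")

expr = "{unl}\n" + relation("agt", go, he) + "\n" + relation("plt", go, school) + "\n{/unl}"
print(expr)
```

Running this reproduces the two-relation expression shown in the text, with `@entry` and `@present` attached to the main verb node.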
Proposed algorithm descriptions

The Bangla EnConverter processes the given input sentence from left to right. It uses two types of windows in processing [11]: analysis windows and condition windows. The currently focused analysis windows are circumscribed by condition windows, as shown in Figure-1, where 'A' indicates an analysis window, 'C' a condition window, and 'ni' an analysis node.

Bangla EnConverter architecture

The architecture of the Bangla EnConverter can be divided into six phases, covering the processing of the input Bangla sentence by the Bangla parser, the creation of a linked list of nodes from the parser output, the extraction of UWs, and the generation of the UNL expression for the input sentence. The phases are: tokenize, linked list creation, Universal Word lookup, case marker lookup, unknown word handling, and UNL creation.

Figure-1. A schematic of the EnConverter.

Tokenize phase

The Bangla EnConverter uses a Bangla parser to tokenize the input sentence. Parsing an input Bangla sentence produces the intermediate outputs of the tokenizer, the morphological analyzer, the part-of-speech tagger, and the person, number, and Bivokti computation.

Linked list creation phase

In this phase, the Bangla EnConverter constructs a linked list of nodes on the basis of the information generated by the Bangla parser, the Bangla-UW dictionary, and the root word-modifier table. Each root word of a token, and the verb modifiers of the main verb, act as candidates for a node. For
each root word, the words obtained by combining it with the root words of the next consecutive tokens are searched in the Bangla-UW dictionary and the root word-modifier table, so that the largest possible token can be formed from the root words stored in the Bangla-UW dictionary. If a token formed by the concatenation of consecutive root words is found as a single entry in the Bangla-UW dictionary or in the root word-modifier table, that group of words is treated as a single token and stored as a node in the linked list; otherwise, each root word of the token is treated as a single token and stored as its own node. A node in the linked list has a Bangla root word attribute, a Universal Word attribute, a Part-of-Speech (POS) information attribute, and a list of lexical and semantic attributes.

Universal word lookup phase

In this phase, the Bangla-UW dictionary is used to map the Bangla root word of each node to a Universal Word and to retrieve its lexical-semantic information. The exact UW is extracted from the dictionary on the basis of the node's Bangla root word attribute and its grammatical category. Since the Bangla-UW dictionary may contain more than one entry for a given Bangla word, the search retrieves the UW that matches both the node's Bangla word and its grammatical category. For example, the Bangla word খেল (khel, 'play') has two entries in the Bangla-UW dictionary, one as a noun and the other as a verb; the lookup selects only the entry that matches the grammatical category assigned to the node by the Bangla parser. If a node was marked as unknown in the first phase of parsing, the node's Bangla word attribute is searched in the dictionary with its grammatical category set to 'null'.
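The linked-list creation and Universal Word lookup phases described above amount to greedy longest-match grouping of consecutive root words, followed by a POS-aware dictionary lookup with a 'null category' fallback. The sketch below illustrates both steps on a toy, romanized dictionary; all entries, names, and the multiword table are illustrative assumptions, not the paper's actual resources.

```python
# Toy Bangla-UW dictionary: (romanized_root, pos) -> Universal Word.
UW_DICT = {
    ("khel", "verb"): "play(icl>do)",
    ("khel", "noun"): "game(icl>activity)",
    ("school", "noun"): "school(icl>building>thing)",
    ("se", "pron"): "he(icl>person)",
}
MULTIWORD = {"high school"}  # stand-in for the root word-modifier table

def build_nodes(roots):
    """Greedy longest-match grouping of consecutive root words into tokens."""
    nodes, i = [], 0
    while i < len(roots):
        # Try the longest span first, shrinking until a dictionary/table hit.
        for j in range(len(roots), i, -1):
            cand = " ".join(roots[i:j])
            if cand in MULTIWORD or any(k[0] == cand for k in UW_DICT):
                nodes.append(cand)
                i = j
                break
        else:
            nodes.append(roots[i])  # unknown single token, kept as its own node
            i += 1
    return nodes

def lookup_uw(root, pos):
    """POS-aware UW lookup with the 'null category' fallback described above."""
    if (root, pos) in UW_DICT:
        return UW_DICT[(root, pos)]
    # Unknown/null POS: return the first dictionary entry for this root, if any.
    for (r, _), uw in UW_DICT.items():
        if r == root:
            return uw
    return None  # no UW: handled by case marker / unknown word phases

print(build_nodes(["se", "high", "school", "e", "khel"]))
# → ['se', 'high school', 'e', 'khel']
print(lookup_uw("khel", "verb"), lookup_uw("khel", None))
```

Note how "high" and "school" merge into one node via the multiword table, while the case marker "e" survives as a lone token with no UW, exactly the situation the case marker lookup phase handles next.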
In the case of multiple entries for that word, the system returns the UW of the first entry, and thus the unknown word becomes known during this phase. After extracting the UW, the node's UW attribute is updated, and its linked list of lexical and semantic attributes is extended by appending the UW dictionary attributes to the attributes generated by the parser.

Case marker lookup phase

If the Bangla root word attribute of a node is not found in the Bangla-UW dictionary, the word may be a case marker or a function word of the language with no corresponding UW. In such a case, the node's Bangla word attribute is searched in the case marker lookup file. If the word is found, the information about the case marker is added to the node's linked list of lexical and semantic attributes, and its UW is set to 'null' (because a case marker has no corresponding UW). This information plays an important role in resolving UNL relations in the UNL generation phase.

Unknown word handling phase

If an unknown word is resolved in the Universal Word lookup phase, then the corresponding node