...that we have proposed. Fig. 1 shows the proposed algorithm for semantic check of Bangla text. For example, consider "মানুষ ভাত খায়" (man eats rice). Here "মানুষ" (man) is the subject and "খায়" (eats) is the verb. We check the relationship in VT (Table 3) and find a True relation between the subject (man) and the verb (eat); the OV relation (Table 4) then indicates the object set S11. Since ভাত (rice) is a member of S11, the sentence is semantically correct. Similarly, "গরু ভাত খায়" ("Cow eats rice") is semantically incorrect because the OV relation is false. Thus any sentence expressing an illogical or irrational relationship is marked as semantically incorrect and is not accepted by the framework that we have proposed.

III. RELATED WORKS

There has been a great deal of research on semantic analysis of text in different languages [14]-[20], but research on semantic analysis for the Bangla language is scarce in the literature [12][14][15]. Soma Paul [14][15] describes an analysis of the unification of two-verb Bangla sentences (V1 and V2) using the semantic principle of compounding, based on the HPSG structure [5][21]. The semantic content of the V2 structure-shares with the content of the V1 that selects it: the first member (V1) takes either the conjunctive participial form or the infinitive form, the second member (V2) bears the inflection, and both member verbs are semantically contentful. Das et al. [12] present a methodology to extract semantic role labels of Bengali nouns using the 5Ws. The 5W task seeks to extract the semantic information of nouns in a natural language sentence by distilling it into the answers to the 5W questions: Who, What, When, Where and Why. Beth Levin [16][17] observes that the behaviour of a verb, particularly with respect to the expression and interpretation of its arguments, is to a large extent determined by its meaning, and thus verb behaviour can be used to probe for linguistically relevant aspects of verb meaning. On this basis, Levin classifies over 3,000 English verbs into categories according to shared meaning and behavior. The Massachusetts Institute of Technology published a survey [18] on verb classes and alternations in Bangla, German, English, and Korean, investigating the relationship between the semantic and syntactic properties of verbs on the basis of cross-linguistic equivalents of Levin's classes [16]. A modified implementation of Levin's theory is presented in [19] for clustering German verbs; it describes and evaluates the application of a spectral clustering technique to the unsupervised clustering of German verbs. The work in [20] detects semantic errors in Arabic texts using a distributed architecture, namely a Multi-Agent System (MAS). In this paper we have investigated the properties of nouns and verbs and their relation with valid objects, on the basis of universal attributes of different animals, species, and objects (such as chairs and tables), for simple Bangla text.
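To make the table-driven check described above concrete, the following minimal sketch shows one way the validation table VT and the object-verb (OV) mapping could be represented in code. The English glosses and the contents of the object sets (including S11) are illustrative assumptions based only on the running example, not the paper's actual tables.

    # Minimal sketch of the lookup tables behind the semantic check.
    # Table contents are illustrative assumptions drawn from the running
    # example (man/cow eats rice), not the paper's actual VT/OV tables.

    # VT maps a (subject, verb) pair to True when the relation is valid.
    VT = {
        ("man", "eat"): True,
        ("cow", "eat"): True,
    }

    # OV maps a (subject, verb) pair to its set of valid objects.
    S11 = {"rice", "bread"}        # object set for human eating; contents assumed
    FODDER = {"grass", "straw"}    # hypothetical object set for a grazing animal
    OV = {
        ("man", "eat"): S11,
        ("cow", "eat"): FODDER,
    }

    def is_semantically_valid(subject, verb, obj):
        """True if VT holds a valid (subject, verb) relation and obj is in
        the object set that OV associates with that pair."""
        if not VT.get((subject, verb), False):
            return False
        return obj in OV.get((subject, verb), set())

    print(is_semantically_valid("man", "eat", "rice"))  # True  (মানুষ ভাত খায়)
    print(is_semantically_valid("cow", "eat", "rice"))  # False (গরু ভাত খায়)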
TABLE V. VERB CATEGORIZATION

Sample input sentences:
রিহম sুেল যায়। েস সাiেকল চালায়। সাiেকল আকােশ uেড়। কিরম ভাত খায়। েস পড়াশনুা কের। তারা বাসায় থােক। তারা িনয়িমত নামাজ পেড় না। রিহম o কিরম ভাi ভাi।

Results: Simple sentences = 06, Others = 02; Correct = 05, Error = 01, Not detected = 02.

IV. EXPERIMENTAL RESULTS

Bangla is a complicated language with a complex grammatical structure, which we had to contend with while testing our methodology. Since our methodology detects semantic errors in simple Bangla text, and there is no corpus consisting only of simple-format Bengali sentences, nor any standard corpus containing semantic errors in its text, we used sample test contents built by expert people and individuals to detect the possible semantic errors. Table V shows a sample experimental analysis for semantic testing. The sample contains different types of sentences, some of which follow the simple structure of Bangla grammar. As we consider only the SOV format, the paragraph contains 6 sentences of this form; the other 2 sentences do not match this structure and therefore cannot be detected by this methodology. By this analysis, 6 sentences are candidates for semantic error checking; of these, 1 sentence has a semantic error ("সাiেকল আকােশ uেড়।") and the remaining 5 sentences are semantically correct.

check-semantic(sentence)
begin
    // Assume the table VT[i,j] is created, where VT[i,j] = T if there is
    // a valid relation between subject si and verb vj.
    // Split the sentence into subject s, verb v and object o;
    // sSet := subject set, vSet := verb set, oSet := object set.
    if (s ∈ sSet and v ∈ vSet and o ∈ oSet) then
    begin
        if (VT[s,v] = T) then
        begin
            S := object set given by the OV relation for (s, v);
            if (o ∈ S) then return Correct
            else return Incorrect
        end
        else return Incorrect;
    end
end.

Figure 1. Algorithm for semantic check of simple Bangla sentences
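The Figure 1 pseudocode translates almost line for line into Python. In the sketch below, sset, vset, oset, vt, and ov are stand-ins for the paper's subject set, verb set, object set, validation table VT, and object-verb relation table OV; the whitespace split and the subject-object-verb word order assume a well-formed three-word simple SOV sentence, as the paper does.

    # Sketch of the check-semantic algorithm of Figure 1; vt is assumed to
    # map (subject, verb) pairs to True/False, and ov to map them to sets
    # of valid objects.
    def check_semantic(sentence, sset, vset, oset, vt, ov):
        words = sentence.split()
        if len(words) != 3:
            return "Not detected"      # not a simple three-word sentence
        s, o, v = words                # simple Bangla sentences follow S-O-V
        if s not in sset or v not in vset or o not in oset:
            return "Not detected"
        if not vt.get((s, v), False):  # no valid subject-verb relation in VT
            return "Incorrect"
        return "Correct" if o in ov.get((s, v), set()) else "Incorrect"

Returning "Not detected" for sentences outside the SOV pattern mirrors the experimental results above, where 2 of the 8 sample sentences could not be handled.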
V. CONCLUSION

We have presented a methodology to detect semantic errors in simple Bangla sentences, and we have categorized the nouns and verbs of Bangla sentences. Although the methodology addresses simple sentences of the SOV form, the categorization of nouns and verbs can be used for other forms of Bangla sentences, such as complex and compound sentences, and even for sentences with multiple verbs. It is important and necessary to complete the validation table and the object-verb relation table for all the verbs and nouns of the Bangla language; the performance of the proposed technique depends greatly on this. We believe the proposed algorithm can easily be extended to semantic error detection in complex and compound sentences.

REFERENCES

[1] Bender E., Sag I. A., and Wasow T., "Syntactic Theory: A Formal Introduction", CSLI Publications, Stanford, CA, 1999.
[2] Wechsler S., "The Semantic Basis of Argument Structure", CSLI Publications, Stanford, CA, 1995.
[3] K. M. Azharul Hasan, Al-Mahmud, Amit Mondal, and Amit Saha, "Recognizing Bangla Grammar using Predictive Parser", International Journal of Computer Science & Information Technology, 3(6), pp. 61-73, 2011.
[4] K. M. A. Hasan, A. Mondal, and A. Saha, "A context free grammar and its predictive parser for Bangla grammar recognition", 13th International Conference on Computer and Information Technology (ICCIT), pp. 87-91, 2010.
[5] Md. Asfaqul Islam, K. M. Azharul Hasan, and Md. Mizanur Rahman, "Basic HPSG Structure for Bangla Grammar", Proceedings of the 15th ICCIT, pp. 185-189, 2012.
[6] K. M. Azharul Hasan, Md. Sajidul Islam, G. M. Mashrur-E-Elahi, and Mohammad Navid Izhar, "Sentiment Recognition from Bangla Text", Technical Challenges and Design Issues in Bangla Language Processing, 2013.
[7] Das A. and Bandyopadhyay S., "Phrase-level polarity identification for Bengali", International Journal of Computational Linguistics and Applications, 1(2), pp. 169-181, 2010.
[8] Mohammed Nazrul Islam and Mohammad Ataul Karim, "Bangla Character Recognition Using Optical Joint Transform Correlation", Technical Challenges and Design Issues in Bangla Language Processing, 2013.
[9] Shah Atiqur Rahman, Kazi Shahed Mahmud, Banani Roy, and K. M. Azharul Hasan, "English to Bengali Translation Using A New Natural Language Processing (NLP) Algorithm", Proceedings of the ICCIT, 2003.
[10] K. M. Azharul Hasan, Muhammad Hozaifa, Sanjoy Dutta, and Rafsan Zani Rabbi, "A Framework for Bangla Text to Speech Synthesis", Proceedings of the 16th ICCIT, pp. 60-64, 2013.
[11] Md. Hanif Seddiqui, Muhammad Anwarul Azim, Mohammad Shahidur Rahman, and M. Zafar Iqbal, "Algorithmic Approach to Synthesize Voice from Bangla Text", Proceedings of the 5th ICCIT, pp. 233-236, 2002.
[12] Amitava Das, Aniruddha Ghosh, and Sivaji Bandyopadhyay, "Semantic role labeling for Bengali using 5Ws", International Conference on Natural Language Processing and Knowledge Engineering (NLP-KE), pp. 1-8, 2010.
[13] Frey T., Gelhausen M., and Saake G., "Categorization of Concerns – A Categorical Program Comprehension Model", Proceedings of the Workshop on Evaluation and Usability of Programming Languages and Tools at the ACM Onward! and SPLASH Conferences, pp. 73-82, 2011.
[14] Soma Paul, "Composition of Compound Verbs in Bangla", Proceedings of the Workshop on Multi-Verb Constructions, Trondheim Summer School, 2003.
[15] Soma Paul, Dorothee Beermann, and Lars Hellan, "Composition of compound verbs in Bangla", Multi-Verb Constructions, 2003.
[16] Beth Levin, "English Verb Classes and Alternations: A Preliminary Investigation", The University of Chicago Press, Chicago and London, 1993.
[17] Levin B. and Rappaport Hovav M., "Unaccusativity: At the Syntax-Lexical Semantics Interface", MIT Press, Cambridge, MA, 1995.
[18] Douglas A. Jones, Robert C. Berwick, Franklin Cho, Zeeshan Khan, Karen T. Kohl, Naoyuki Nomura, Anand Radhakrishnan, Ulrich Sauerland, and Brian Ulicny, "Technical Report on Verb Classes and Alternations in Bangla, German, English, and Korean", Massachusetts Institute of Technology, Cambridge, MA, USA, 1993.
[19] Chris Brew and Sabine Schulte im Walde, "Spectral Clustering for German Verbs", Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing (EMNLP), Volume 10, pp. 117-124, 2002.
[20] Chiraz Ben Othmane Zribi and Mohamed Ben Ahmed, "Detection of semantic errors in Arabic texts", Artificial Intelligence, 195, pp. 249-264, 2013.
[21] A. Copestake, "Implementing Typed Feature Structure Grammars", CSLI Publications, Stanford, 2002.
<s>Does Word2Vec encode human perception of similarity? A study in Bangla

Manjira Sinha, Center for Education Technology, IIT Kharagpur, Kharagpur, India, manjira87@gmail.com
Rakesh Dutta, Dept. of Computer Science and Application, University of North Bengal, Siliguri, India, rakeshhijli@gmail.com
Tirthankar Dasgupta, Innovation Lab, Tata Consultancy Services, Kolkata, India, iamtirthankar@gmail.com

Abstract—The quest to understand how language and concepts are organized in the human mind is a never-ending pursuit undertaken by researchers in computational psycholinguistics; on the other hand, researchers have tried to quantitatively model the semantic space of written corpora and discourses through different computational approaches. While the two strands interact, with psycholinguistic findings informing computational models and computational insights enhancing NLP methods, it has seldom been systematically studied whether they corroborate each other. In this paper, we explore how, and whether, the standard word-embedding-based semantic representation models reflect the human mental lexicon. To that end, we conducted a semantic priming experiment to capture the psycholinguistic aspects and compared the results with a distributional word-embedding model: Bangla word2vec. Analysis of reaction times indicates that corpus-based semantic similarity measures do not reflect the true nature of the mental representation and processing of words. To the best of our knowledge, this is the first study of its kind in any language, and especially in Bangla.

Index Terms—Computational Psycholinguistics, Semantic Priming, Mental Lexicon, Response Time, Degree of Priming, Word2vec model.

I. Introduction

The representation of words in the human mind is often termed the mental lexicon (ML) [1]–[3]. Although words have various degrees of association governed by different linguistic and cognitive constraints, the precise nature of the interconnections in the mental lexicon is not clear and has remained a subject of debate over the past few decades. However, a clear understanding of how words are represented and processed in the mental lexicon will not only enhance our knowledge of the underlying cognitive processing but may also be used to develop cognitively aware Natural Language Processing (NLP) applications. Many cognitive experiments have been carried out in different languages to study the semantic representation of words in the ML. Typically, semantic priming experiments are used for such studies [4], [5]. Priming involves exposure to a stimulus (the prime) that results in quicker recognition of a related stimulus (the target). For example, the amount of time required to recognize the word BRANCH is smaller if it is preceded by a related stimulus like TREE than by an unrelated stimulus like HOUSE. The data collected from such cognitive experiments are then used to develop robust computational models of the representation and processing of words in the ML.

Attempts have also been made to provide computational models of the representation of semantically similar words in the mental lexicon. Among them, one of the most commonly used is the distributional semantic model. The idea behind distributional semantics is that words with similar meanings are used in similar contexts [6]. Thus, semantic relatedness can be measured by looking at the similarity between word co-occurrence patterns in text corpora.
In linguistics, this idea has been a useful line of research for the past couple of decades [7], [8]. Recently, owing to advancements in deep neural networks, a more robust form of distributional semantic model has been proposed in the form of word2vec word embeddings. However, most of these</s> |
<s>works are based on languages like English, French, German, Arabic, and Italian. Very few attempts have been made to perform such studies for the Bangla ML. Moreover, an important aspect of such a study is to verify whether the distributional approach to word representation is truly the way the human mental lexicon works.

In this research, we have explored how, and whether, the standard word-embedding-based semantic representation models reflect the human mental lexicon. The paper discusses various cases where highly similar words identified by the computational models seldom show any priming effect in users while, on the other hand, word pairs having low corpus similarity may produce a high degree of priming.

The objectives of the paper are as follows:
• We conduct a semantic priming experiment over a set of 300 word pairs to study the organization and representation of Bangla words in the mental lexicon.
• For each of the 300 Bangla word pairs, we compute word embeddings using the word2vec technique and compute the semantic distance between the members of each pair.
• We observe whether there exists any correlation between the computed similarity score and the priming effect. The null hypothesis is that words having high cosine similarity scores will show a high degree of priming.

II. Semantic Priming Experiment on Bangla Semantically Similar Words

In order to study the effect of priming on semantically related words in Bangla, we executed the masked priming experiment discussed in [9]–[11]. In this technique, the prime word (say chora (thief)) is placed between a forward pattern mask and the target stimulus (say pulisa (police)), which acts as a backward mask. This is illustrated below:

prime (1000 ms): chora (THIEF) → target (2000 ms): pulisa (POLICE)

Once the target probe has been presented to the participants for a given period of time, they are asked to decide whether the given target word is valid or not. The participants enter their decision by pressing the key 'J' (for valid words) or 'K' (for invalid words) on a standard QWERTY keyboard. The time taken to press one of these keys after the display of the target word, called the response time (RT), is recorded by the system timer. We then display the same target word once again, but with a different visual probe called the CONTROL word. The CONTROL word shows no semantic or orthographic similarity with either the prime or the target word. For example, baYaska (aged) and briddha (aged) form a prime-target pair, and the corresponding control-target pair could be naYana (eye) and briddha (aged). We used the DMDX software tool (http://www.u.arizona.edu/~kforster/dmdx/download.htm) to conduct all the experiments in this work.

A. Data and Experiment

We chose 300 word pairs from a Bangla corpus of around 3.2 million unique words, obtained from www.snltr.org. The corpus consists of the literary works of famous Bangla authors; we extended it by adding texts from Bangla Wikipedia, news sources, and blogs. The words were chosen in such a way that they represent a substantial amount of distribution over the entire corpus. This is further useful for constructing the word embeddings used to compute semantic similarity.</s> |
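To make the RT bookkeeping concrete, the following is a minimal sketch, not the authors' code (DMDX handles trial timing and logging internally), of averaging logged response times per target word and condition; the record fields ("target", "condition", "rt_ms") are hypothetical stand-ins for whatever the experiment log actually contains.

from collections import defaultdict
from statistics import mean

trials = [
    {"target": "polisa", "condition": "prime-target",   "rt_ms": 548.6},
    {"target": "polisa", "condition": "control-target", "rt_ms": 786.9},
    {"target": "polisa", "condition": "prime-target",   "rt_ms": 552.3},
    # ... one record per participant per trial
]

# Group RTs by (target word, pairing condition) and average them,
# which is exactly the quantity tabulated per pair in Table I below.
rts = defaultdict(list)
for t in trials:
    rts[(t["target"], t["condition"])].append(t["rt_ms"])

avg_rt = {key: mean(vals) for key, vals in rts.items()}
print(avg_rt)  # e.g. {('polisa', 'prime-target'): 550.45, ...}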
<s>B. Implicit Perception of Semantic Similarity

1) Methods and Material: There were 300 prime-target pairs classified into two different classes. For each target, a prime and a control word were chosen. Class-I primes have a high degree of relatedness (e.g., সূয (Surya (sun)) – অ (asta (sunset))), whereas class-II primes have a low degree of relatedness (e.g., ছাগল (Chagal (goat)) – অ (asta (sunset))). The controls possess no semantic, orthographic, or morphological relationship with the target word. It is important to prevent the subject from making any strategic guess about the relation between pairs of words; thus, the prime-target and the control-target words were mixed with an equal number of fillers, which are out-of-vocabulary items such as non-words.

2) Participants: The experiments were conducted on 100 native Bangla speakers, each holding at least a graduate degree. The age of the subjects varies between 20 and 30 years.

3) Result: Extreme reaction times and incorrect responses in the lexical decision task (about 7.5%) were not included in the latency analysis. We set the extreme reaction time for a subject as the median lexical latency of that subject. Table I depicts the average reaction time (RT) of some prime-target and control-target word pairs.

Table I. The average reaction time (RT) of the prime-target and control-target word pairs.

Prime Word | Target Word | Control Word | Average RT (P-T) | Average RT (C-T)
চার | ডাকাত | জল | 548.62 | 786.89
লাভ | লাভী | ভয় | 636.71 | 716.80
রাগ | রাগী | বাতাস | 700.14 | 821.81
জল | বায়ু | বই | 612.81 | 726.96
িবচার | িবচারক | জামা | 669.99 | 809.42

We then calculate the degree of priming (DOP) for each word pair, i.e., the average reaction time of the control-target pair minus the average reaction time of the prime-target pair:

DOP = RT(control-target) − RT(prime-target)    (1)

After obtaining the DOP for each pair, we average the DOP across all users.

III. Distributional Approach towards Measuring Semantic Similarity

Mikolov et al. (2013) [12] argued that neural-network-based word embedding (word2vec) models (see Figure 1) are efficient at creating robust semantic spaces. Typically, word embeddings are computed using one of two techniques: (a) the skip-gram model and (b) the Continuous Bag of Words (CBOW) model. The skip-gram model considers a central word and tries to predict the context words that best fit it; the CBOW model, conversely, tries to predict a target word given its context words. Consider, for example, the sentence ক প একধরেনর সরীসৃপ যারা পািন এবং ডাঙা দইু জায়গােতই বাস কের। and let ক প be the input to the neural network. The objective here is to predict the target word সরীসৃপ using the single context input word ক প. We use the one-hot encoding of the input word and measure the output error against the one-hot encoding of the target word; thus, the vector representation of the target word is learned by learning to predict that target word. The basic architecture of the CBOW model is depicted in Figure 1.

Figure 1. Illustration of the Word2Vec Model</s> |
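As an illustration of this setup, a CBOW word2vec model can be trained with the gensim library roughly as follows. This is a sketch, not the authors' training script; the tiny romanized two-sentence corpus is a stand-in for the 3.2-million-unique-word Bangla corpus of Section II-A, and all token spellings are illustrative.

from gensim.models import Word2Vec

sentences = [
    ["kochchhop", "ekdharoner", "sorisrip"],          # "the turtle is a kind of reptile"
    ["kochchhop", "pani", "ebong", "danga", "dui",
     "jaygatei", "bas", "kore"],                      # "...lives both in water and on land"
]

# sg=0 selects CBOW (predict the target word from its context words);
# sg=1 would select the skip-gram variant instead.
model = Word2Vec(sentences, vector_size=300, window=5, min_count=1, sg=0, seed=1)
print(model.wv.similarity("kochchhop", "sorisrip"))   # cosine similarity in [-1, 1]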
<s>We have computed the word2vec embedding of each of the words present in the 300 prime-target and control-target pairs. We used two pre-trained word2vec models, model 2 (https://github.com/Kyubyong/wordvectors) and model 3 (https://fasttext.cc/docs/en/crawl-vectors.html), and also trained word2vec embeddings of different dimensions on the corpus discussed in Section II-A (model 1). From the different sets of embeddings, we computed the semantic similarity between the members of each word pair using the cosine similarity measure:

Cos(w_i, w_j) = (w̄_i · w̄_j) / (|w̄_i| |w̄_j|)    (2)

Next, we compared the scores with the DOP collected from the priming experiment. Table II depicts the correlation between the different embedding models and the reaction times obtained from the priming experiment.

Table II. Comparison of the different models with respect to the human-annotated data.

Model name | Vocab | Dimension | Correlation
Model 1 | 427261 | 300 | -0.2422
Model 1 | 427261 | 400 | -0.2104
Model 2 | 10059 | 200 | -0.0994
Model 3 | 145350 | 300 | 0.0203

From Table II we observe that there is a very weak correlation between the word2vec models and the reaction-time data.

Table III. Similarity scores: human-annotated (DOP) vs. word2vec.

Word1 | Word2 | Control Word | Degree of priming | Word2Vec
চার | ডাকাত | জল | 238.28 | 0.3236
দশ | দশী | জানালা | 194.61 | 0.2549
িবদ ালয় | ছাএ | পািখ | 127.10 | 0.3771
উে শ | িবেধয় | সঁাতার | 309.81 | 0.2678
ইিতহাস | ভেগাল | বালক | 197.67 | 0.4535
রাম | রাবণ | রঁাধুিন | 160.19 | 0.4060
তঁাতী | কাপড় | রিব | 153.19 | 0.3686
ভার | আেলা | মািঝ | 164.45 | 0.3758
িবদু ৎ | গজন | দয়া | 131.00 | 0.2584
িপয়ন | িচ | চঁাদ | 166.10 | 0.3292
রাত | ডানা | | 189.92 | 0.2880
মধুসূদন | রবী নাথ | মাথা | -22.73 | 0.7273
ীপ | উপ ীপ | ফল | -15.11 | 0.6499
গ াস | বা | বািড় | -85.44 | 0.5529

From Table III we observe two types of anomaly between the similarity score obtained from word2vec and the priming result (the similarity perceived by humans, as recorded through the psycholinguistic experiments): some word pairs have a high degree of priming but a low word2vec similarity score, while others have a low degree of priming but a high word2vec similarity score. From the types of word pairs it is apparent that some pairs that we use together in our daily lives and colloquial speech, such as িবদু ৎ and গজন , foster a strong mental connection, reflected in the high DOP; yet, because such pairs are less likely to co-occur in a formal written corpus, they have low cosine similarity scores. Conversely, certain word pairs such as মধুসূদন and রবী নাথ, or গ াস and বা , have high word-similarity scores because they are likely to co-occur often due to their categorization (the first pair are names of poets, the second are physics concepts), but not so much in day-to-day usage. Hence, our null hypothesis is proven wrong, and we can infer that a substantial gap still exists between how we process words and how computational approaches assume we do.</s> |
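The comparison reported in Table II can be reproduced in outline as follows: cosine similarity (Eq. 2) for each embedding pair, then a Pearson correlation against the behavioural DOP scores. This is a schematic sketch; the random stand-in vectors and DOP values are illustrative, not the study's data.

import numpy as np
from scipy.stats import pearsonr

def cosine(u, v):
    # Eq. (2): dot product normalized by the vector magnitudes.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
pairs = [(rng.normal(size=300), rng.normal(size=300)) for _ in range(300)]
dop = rng.normal(loc=150.0, scale=80.0, size=300)   # degree of priming per pair

sims = np.array([cosine(u, v) for u, v in pairs])
r, p = pearsonr(sims, dop)                           # correlation as in Table II
print(f"correlation = {r:.4f} (p = {p:.3g})")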
<s>IV. Conclusions and Discussion

In this paper we aimed to study the representation and processing of semantically similar words in the Bangla mental lexicon. Accordingly, we conducted a semantic priming experiment to determine the reaction time of subjects for prime-target and control-target pairs. We further computed the semantic similarity between the same word pairs using the Bangla word2vec word embedding model, and examined whether the standard word-embedding-based semantic representation models correctly reflect the organization and processing of the mental lexicon. Analysis of reaction times indicates that corpus-based semantic similarity measures do not reflect the true nature of the mental representation and processing of words. The paper discusses various cases where highly similar words seldom show any priming effect in users, whereas word pairs having low corpus similarity may result in a high degree of priming.

It is clear that the existing word embedding models are primarily shaped by the underlying corpus on which they have been trained. Therefore, in order to capture human word-representation strategies, the corpus ideally should bear a close resemblance to the human spoken form; in practice, such an ideal condition can only be approximated.

In particular, most of the natural language humans are exposed to is in spoken form. In order to use such data, the spoken forms must be manually transcribed into their textual representations, which is not only time consuming but requires a huge manpower effort. On the other hand, the typical textual models that presently exist are based on the written language available on the open web. Although such texts are available in plenty, they are often less representative of actual language input. In a recent work, Brysbaert et al. (2011) [13] showed that word frequency measures based on a corpus of 50 million words from subtitles predicted the lexical decision times of the English Lexicon Project [14] better than the Google frequencies based on a corpus of hundreds of billions of words from books. Similar findings were reported for German [15]. In particular, word frequencies derived from non-fiction academic texts perform worse [16].

Acknowledgment

The graduate students of Vidyasagar University and the twelfth-standard students of Gopali I.M. High School in India actively helped us perform these psycholinguistic experiments. We thank all the participants.

References

[1] J. Grainger, P. Colé, and J. Segui, "Masked morphological priming in visual word recognition," Journal of Memory and Language, vol. 30, no. 3, pp. 370–384, 1991.
[2] E. Drews and P. Zwitserlood, "Morphological and orthographic similarity in visual word recognition," Journal of Experimental Psychology: Human Perception and Performance, vol. 21, no. 5, p. 1098, 1995.
[3] M. Taft, "Morphological decomposition and the reverse base frequency effect," The Quarterly Journal of Experimental Psychology Section A, vol. 57, no. 4, pp. 745–765, 2004.
[4] S. Dehaene, L. Naccache, G. Le Clec'H, E. Koechlin, M. Mueller, G. Dehaene-Lambertz, P.-F. van de Moortele, and D. Le Bihan, "Imaging unconscious semantic priming," Nature, vol. 395, no. 6702, p. 597, 1998.
[5] J. H. Neely, "Semantic priming and retrieval from lexical memory: Roles of inhibitionless spreading activation and limited-capacity attention," Journal of Experimental Psychology: General, vol. 106, no. 3, p. 226, 1977.
[6] Z. S. Harris, "Distributional structure," Word, vol. 10, no. 2-3, pp. 146–162, 1954.
[7] K. Lund and C. Burgess, "Producing high-dimensional semantic spaces from lexical co-occurrence," Behavior Research Methods, Instruments, & Computers, vol. 28, no. 2, pp. 203–208, 1996.
[8] T. K. Landauer and S. T. Dumais, "A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge," Psychological Review, vol. 104, no. 2, p. 211, 1997.
[9] K. I. Forster and C. Davis, "Repetition priming and frequency attenuation in lexical access," Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 10, no. 4, p. 680, 1984.
[10] K. Rastle, M. H. Davis, W. D. Marslen-Wilson, and L. K. Tyler, "Morphological and semantic effects in visual word recognition: A time-course study," Language and Cognitive Processes, vol. 15, no. 4-5, pp. 507–537, 2000.</s> |
<s>[11] W. D. Marslen-Wilson, M. Bozic, and B. Randall, "Early decomposition in visual word recognition: Dissociating morphology, form, and meaning," Language and Cognitive Processes, vol. 23, no. 3, pp. 394–421, 2008.
[12] T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word representations in vector space," arXiv preprint arXiv:1301.3781, 2013.
[13] M. Brysbaert, E. Keuleers, and B. New, "Assessing the usefulness of Google Books' word frequencies for psycholinguistic research on word processing," Frontiers in Psychology, vol. 2, p. 27, 2011.
[14] D. A. Balota, M. J. Yap, K. A. Hutchison, M. J. Cortese, B. Kessler, B. Loftis, J. H. Neely, D. L. Nelson, G. B. Simpson, and R. Treiman, "The English Lexicon Project," Behavior Research Methods, vol. 39, no. 3, pp. 445–459, 2007.
[15] M. Brysbaert, M. Buchmeier, M. Conrad, A. M. Jacobs, J. Bölte, and A. Böhl, "The word frequency effect," Experimental Psychology, 2011.
[16] M. Brysbaert, B. New, and E. Keuleers, "Adding part-of-speech information to the SUBTLEX-US word frequencies," Behavior Research Methods, vol. 44, no. 4, pp. 991–997, 2012.</s> |
<s>Automatic Scoring of Bangla Language Essay Using Generalized Latent Semantic Analysis

Submitted by Md. Monjurul Islam, Student ID: 040505053F

A thesis submitted to the Department of Computer Science and Engineering in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE IN ENGINEERING IN COMPUTER SCIENCE AND ENGINEERING

Supervised by Dr. A. S. M. Latiful Hoque, Associate Professor, Department of CSE, BUET

Department of Computer Science and Engineering
BANGLADESH UNIVERSITY OF ENGINEERING AND TECHNOLOGY
Dhaka, Bangladesh
March, 2011

The thesis "Automatic Scoring of Bangla Language Essay Using Generalized Latent Semantic Analysis", submitted by Md. Monjurul Islam, Roll No. 040505053F, Session: April 2005, to the Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, has been accepted as satisfactory for the partial fulfillment of the requirements for the degree of Master of Science in Engineering (Computer Science and Engineering) and approved as to its style and contents. Examination held on March 20, 2011.

Board of Examiners
1. Dr. A. S. M. Latiful Hoque, Associate Professor, Department of CSE, BUET, Dhaka-1000 (Chairman, Supervisor)
2. Dr. Md. Monirul Islam, Professor and Head, Department of CSE, BUET, Dhaka-1000 (Member, Ex-officio)
3. Dr. Md. Mostofa Akbar, Professor, Department of CSE, BUET, Dhaka-1000 (Member)
4. Dr. Mohammad Mahfuzul Islam, Associate Professor, Department of CSE, BUET, Dhaka-1000 (Member)
5. Dr. Shazzad Hosain, Assistant Professor, Department of EECS, North South University, Dhaka-1229 (Member, External)

Declaration

I hereby declare that the work presented in this thesis is the outcome of the investigation performed by me under the supervision of Dr. A. S. M. Latiful Hoque, Associate Professor, Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka. I also declare that no part of this thesis has been or is being submitted elsewhere for the award of any degree.

(Md. Monjurul Islam)

Acknowledgement

First, I express my heartiest thanks and gratefulness to Almighty Allah for His divine blessings, which made it possible for me to complete this thesis successfully. I feel grateful to, and wish to acknowledge my profound indebtedness to, Dr. A. S. M. Latiful Hoque, Associate Professor, Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology. The deep knowledge and keen interest of Dr. A. S. M. Latiful Hoque in the field of information retrieval influenced me to carry out this thesis. His endless patience, scholarly guidance, continual encouragement, constructive criticism, and constant supervision have made it possible to complete this thesis. I also express my gratitude to Professor Dr. Md. Monirul Islam, Head of the Department of Computer Science and Engineering, BUET, for providing me with enough lab facilities to carry out the necessary experiments of my research in the graduate lab of BUET. I would like to thank the members of the examination committee, Dr. Md. Mostofa Akbar, Professor, Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dr. Mohammad Mahfuzul Islam, Associate Professor, Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology,</s> |
<s>and Dr. Shazzad Hosain, Assistant Professor, Department of Electrical Engineering & Computer Science, North South University, Dhaka, for their helpful suggestions and careful review of this thesis. I would like to convey gratitude to all my course teachers, whose teaching helped me greatly in starting and completing this thesis work. I am also grateful to Md. Aman-Ur-Rashid, Bangla language teacher of Engineering University High School, Dhaka, for providing pregraded answer scripts containing Bangla essays and narrative answers for testing my thesis work. Lastly, I am also grateful to my family and colleagues for giving me continuous support.

Abstract

Automated Essay Grading (AEG) is a very important research area in educational assessment. Several AEG systems have been developed using statistical techniques, the Bayesian text classification technique, Natural Language Processing (NLP), and Artificial Intelligence (AI), among others. Latent Semantic Analysis (LSA) is an information retrieval technique used for automated essay grading. LSA forms a word-by-document matrix, and the matrix is decomposed using the Singular Value Decomposition (SVD) technique; it does not consider the word order in a sentence. Existing AEG systems based on LSA cannot achieve a level of performance high enough to be a replica of a human grader. Moreover, most essay grading systems are used for grading essays written in English or other European languages. We have developed a Bangla essay grading system using Generalized Latent Semantic Analysis (GLSA), which uses an n-gram-by-document matrix instead of the word-by-document matrix of LSA. We have also developed an architecture for training essay set generation and for the evaluation of submitted essays using the training essays. We have evaluated this system using real and synthetic datasets. We have developed training essay sets for three domains: standard Bangla essays titled "বাংলােদেশর sাধীনতা সংgাম" and "কািরগির িশkা", and descriptive answers from S.S.C.-level Bangla literature. We have obtained 89% to 95% accuracy compared with human graders, which is higher than that of the existing AEG systems.

Contents

Declaration
Acknowledgement
Abstract
Contents
List of Tables

Chapter 1: Introduction
1.1 Background
1.2 Problem Definition
1.3 Objectives
1.4 Overview of the Thesis
1.5 Organization of the Thesis

Chapter 2: Literature Review
2.1 Project Essay Grader (PEG)
2.2 Arabic Essay Scoring System
2.3 E-Rater
2.4 IntelliMetric
2.5 Bayesian Essay Test Scoring System (BETSY)
2.6 KNN Approach
2.7 Latent Semantic Analysis based AEG techniques
2.7.1 What is Latent Semantic Analysis (LSA)?
2.7.2 Automatic Thai-language Essay Scoring using Artificial Neural Networks (ANN) and LSA
2.7.3 Automated Japanese Essay Scoring System: JESS
2.7.4 Apex
2.7.5 Intelligent Essay Assessor (IEA)
2.8 Summary

Chapter 3: AEG with GLSA: System Architecture and Analysis
3.1 Training Essay Set Generation
3.1.1 Preprocessing the Training Bangla Essays
<s>Creation ............................................................. 15 3.1.3 Compute the SVD of n-gram by Document Matrix .......................................... 16 3.1.4 Dimensionality Reduction of the SVD Matrices ............................................... 18 3.1.5 Human Grading of Training Essays .................................................................. 18 3.1.6 Essay Set Generation ......................................................................................... 19 3.2 The Evaluation of Submitted Essay ............................................................................ 19 3.2.1 Grammatical Errors Checking ........................................................................... 19 3.2.2 Preprocessing of Submitted Essay ..................................................................... 19 3.2.3 Query Vector Creation....................................................................................... 20 3.2.4 Assigning Grades to the Submitted Essays using Cosine Similarity ................ 20 3.3 The Evaluation of ABESS ........................................................................................... 22 3.4 Analysis of AEG with GLSA ...................................................................................... 22 3.5 An illustrative example ............................................................................................... 27 3.5.1 n–gram by Document Matrix Creation .............................................................. 29 3.5.2 Truncation of SVD Matrices ............................................................................. 31 3.5.3 Evaluation of submitted answer ........................................................................ 34 Chapter 4: Simulation ........................................................................................................... 37 4.1 Experimental Environment ......................................................................................... 37 4.2 Dataset Used for Testing ABESS ................................................................................ 37 4.3 Evaluation Methodology ............................................................................................. 38 4.4 Simulation Results....................................................................................................... 38 4.4.1 Testing ABESS by Using True Positive, False positive, True Negative and False Negative ............................................................................................................ 45 4.4.2 Testing ABESS by Using Precision, Recall and F1- measure .......................... 51 Chapter 5: Conclusion ........................................................................................................... 57 5.1 Contributions ............................................................................................................... 57 5.2 Suggestions for Future Research ................................................................................. 58 Related Publication:............................................................................................................... 58 References ............................................................................................................................... 59 VII List of Figures Fig. 3.1: Overall framework of ABESS .................................................................................. 14 Fig. 
Fig. 3.2: Training essay set generation
Fig. 3.3: The SVD of matrix
Fig. 3.4: The truncation of SVD matrices
Fig. 3.5: ABESS evaluation of submitted essay
Fig. 3.6: Query matrix (q)
Fig. 3.7: Angle between document vector and query vector
Fig. 3.8: The evaluation of ABESS
Fig. 4.1: Grade point mapping from human to ABESS for synthetic essay "বাংলােদেশর sাধীনতা সংgাম" (Bangladesher Shadhinota Songram)
Fig. 4.2: Mapping of grades from human grades to ABESS for essay "কািরগির িশkা" (Karigori Shikkha)
Fig. 4.3: Mapping of grades from human grades to ABESS for narrative answers of SSC level Bangla literature
Fig. 4.4: Comparison of human grade and ABESS for the essay "বাংলােদেশর sাধীনতা সংgাম" (Bangladesher Shadhinota Songram)
Fig. 4.5: Number of essays missed and spurious by ABESS for the essay "বাংলােদেশর sাধীনতা সংgাম" (Bangladesher Shadhinota Songram)
Fig. 4.6: True positive, false positive and false negative of ABESS test result for the essay "বাংলােদেশর sাধীনতা সংgাম" (Bangladesher Shadhinota Songram)
Fig. 4.7: Comparison of human grade and ABESS for the essay "কািরগির িশkা" (Karigori Shikkha)
Fig. 4.8: Number of essays missed and spurious by ABESS for the essay "কািরগির িশkা" (Karigori Shikkha)
Fig. 4.9: True positive, false positive and false negative of ABESS for the essay "কািরগির িশkা" (Karigori Shikkha)
Fig. 4.10: Precision, recall of ABESS for essay "কািরগির িশkা" (Karigori Shikkha)
Fig. 4.11: Precision and recall of ABESS for the narrative answer

List of Tables
Table 3.1: Training answers with corresponding grades
Table 3.2: List of selected n-grams for indexing
Table 3.3: Weighting scheme
<s>28 Table 3.4: n–gram by document matrix creation .................................................................... 29 Table 3.5: Creation of document matrix for essay 1E ........................................................... 33 Table 3.6: Query matrix for submitted answer ....................................................................... 35 Table 3.7: Cosine similarity between document vector and query vector .............................. 36 Table 4.1: The students’ submitted data set ............................................................................ 38 Table 4.2: Grade point according to obtained marks .............................................................. 38 Table 4.3: Difference between teacher grade and ABESS grade for “বাংলােদেশর sাধীনতা সংgাম” (Bangladesher Shadhinota Songram) ....................................................................................... 39 Table 4.4: Comparison of human grade and ABESS grade for essay “কািরগির িশkা” ................. 40 (Karigori Shikkha) ................................................................................................................... 40 Table 4.5: Comparison of human grade and ABESS grade for the narrative answer ............. 42 Table 4.6: True positive, false positive, true negative and false negative .............................. 45 Table 4.7: True positive, true negative, false positive and false negative of ABESS for essay “বাংলােদেশর sাধীনতা সংgাম” (Bangladesher Shadhinota Songram) ................................................... 46 Table 4.8: True positive, true negative, false positive and false negative of ABESS for essay “কািরগির িশkা” (Karigori Shikkha) ............................................................................................... 48 Table 4.9: True positive, true negative, false positive and false negative of ABESS for narrative answers of SSC level Bangla literature .................................................................... 50 Table 4.10: Precision and recall of ABESS for synthetic essay “বাংলােদেশর sাধীনতা সংgাম” (Bangladesher Shadhinota Songram) ....................................................................................... 53 Table 4.11: Precision and recall of ABESS for essay “কািরগির িশkা” (Karigori Shikkha) ......... 54 Table 4.12: Precision and recall of ABESS for the narrative answer .................................... 55 Table 4.13: Comparison between the performances of four AEG approaches ....................... 56 Chapter 1 Introduction Assessment is considered to play a central role in the educational process. Assessing students’ writing is one of the most expensive and time consuming activity for educational system. The interest in the development and the use of automated assessment system has grown exponentially in the last few years. Most of the automated assessment tools are based on objective type questions: i.e. multiple choice questions, short answer, selection/association, hot spot, true/false and visual identification. Multiple choice examinations are easy to grade by a computer. This question format is widely criticized, because it allows students to blindly guess the correct answer. It may also reduce the writing skills of students. On the other hand, essay questions, the most useful tool to assess learning outcomes, implying the ability to organize and integrate ideas, the ability to express oneself in writing. Assessing students’ essays and providing thoughtful feedback is time consuming. 
This issue may be resolved through the adoption of an Automated Essay Grading (AEG) system. Until recently, little thought had been given to the idea of automating the essay scoring process. It is necessary to develop an AEG system that can be a replica of a human grader.

1.1 Background

Automatic grading of essays is substantially more demanding, and research has been conducted on it since the 1960s. Several AEG systems have been developed under academic and commercial initiatives using statistical techniques [1], [2], Natural Language Processing (NLP) [3], [4], the Bayesian text classification technique [5], the K-Nearest Neighbour (KNN) technique [6], Information Retrieval (IR)</s> |
<s>techniques [7]–[11], and Artificial Intelligence (AI) [8], among many others. The available systems include: Project Essay Grade (PEG), which is based on surface characteristics of the essay, such as its length in words and the number of commas, and does not consider the content of the essay [1], [12]; Electronic Essay Rater (E-Rater), based on statistical and NLP techniques [3], [12]; BETSY, based on the Bayesian text classification technique, which uses the Bernoulli Model (BM) and the Multivariate Bernoulli Model (MBM) [5]; Intelligent Essay Assessor (IEA), which uses the Latent Semantic Analysis (LSA) technique, itself based on IR [12]; and JESS, which is based on Latent Semantic Indexing [10]. The accuracy of the existing systems is at most 91%. Many of the existing systems are applicable only to short essays [2], [6], and most are applicable to English only [13]. No work has been found on grading Bangla-language essays.

1.2 Problem Definition

One important criterion for an AEG system is accuracy: how close is the grade given by the computer to that of the human grader? Existing AEG systems have focused on mechanical properties (grammar, spelling, punctuation) and on simple stylistic features, such as wordiness and overuse of the passive voice. However, syntax and style alone are not sufficient to judge the merit of an essay. The earliest approaches, especially PEG, were based solely on surface characteristics of the essay, such as its length in words and the number of commas. LSA is a newer, statistically based technique for comparing the semantic similarity of texts [14]–[18]. However, the existing AEG techniques that use LSA do not consider the word sequence in the documents, and in existing LSA methods the creation of the word-by-document matrix is somewhat arbitrary. Automated essay grading using these methods is not a replica of the human grader.

1.3 Objectives

The objectives of the thesis are to:
• develop a Bangla essay grading system,
• design algorithms for preprocessing essays, removing stopwords and stemming words to their stems or roots,
• score the preprocessed essays using Generalized Latent Semantic Analysis (GLSA),
• measure the reliability of the new AEG technique, and
• compare the performance of the technique with existing essay grading techniques.

1.4 Overview of the Thesis

LSA is a technique that was originally designed for indexing documents and text retrieval. LSA represents documents and their word content in a large two-dimensional matrix. Using a matrix algebra technique known as Singular Value Decomposition (SVD), the matrix is decomposed, new relationships between words and documents are uncovered, and existing relationships are modified to more accurately represent their true significance [17]. The SVD matrices are then truncated to reduce errors [19]. The word-by-document matrix used in existing LSA does not consider word order in a sentence: in its formation, the word pair "carbon dioxide" yields the same result as "dioxide carbon". This problem is called proximity. We have proposed the Generalized Latent Semantic Analysis (GLSA) technique to handle proximity in the essay grading system. In GLSA, an n-gram-by-document matrix is created instead of the word-by-document matrix of LSA [20]. According to GLSA, a bigram vector for "carbon dioxide" is atomic, rather than the combination of "carbon" and "dioxide"; GLSA therefore preserves the proximity of words in a sentence. We have used GLSA because it generates clearer concepts than LSA.</s> |
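As a concrete illustration of the order-preserving indexing unit GLSA uses, here is a minimal sketch (not the thesis implementation) of extracting word n-grams: "carbon dioxide" stays one atomic bigram, distinct from "dioxide carbon".

def word_ngrams(tokens, n):
    # Slide a window of n consecutive words over the token list.
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "plants absorb carbon dioxide from the air".split()
print(word_ngrams(tokens, 2))
# ['plants absorb', 'absorb carbon', 'carbon dioxide', 'dioxide from',
#  'from the', 'the air']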
<s>At the same time, the existing LSA technique is not directly applicable to Bangla essay grading. We have proposed an essay grading system named Automated Bangla Essay Scoring System (ABESS) using GLSA.

1.5 Organization of the Thesis

The thesis is organized as follows. In Chapter 2, we present existing approaches to the automated assessment of essays. In Chapter 3, we discuss the system architecture and analysis of our developed model; the architecture is partitioned into two main parts, the generation of training essay sets and the evaluation of submitted essays using those sets, and in the analysis phase we present the algorithms for each step. In Chapter 4, we evaluate the system using real and synthetic datasets: we have developed training essay sets for three domains (standard Bangla essays titled "বাংলােদেশর sাধীনতা সংgাম" and "কািরগির িশkা", and descriptive answers from S.S.C.-level Bangla literature), tested the system on these datasets, and compared the results with existing techniques. Chapter 5 concludes the thesis with our contributions and directions for further research.

Chapter 2: Literature Review

Automated Essay Grading (AEG) is a very important research area for the use of technology in educational assessment. Researchers have been working on this problem since the 1960s, and several models have been developed for AEG.

2.1 Project Essay Grader (PEG)

Ellis Page developed Project Essay Grader (PEG), the first attempt at scoring essays by computer [1]. Page uses the terms trins and proxes for grading an essay: trins refer to intrinsic variables such as fluency, diction, grammar, and punctuation, while proxes denote the approximations (correlations) of the intrinsic variables. The scoring methodology of PEG contains a training stage and a scoring stage. PEG is trained on a sample of essays in the former stage. In the latter stage, proxes are determined for each essay, and these variables are entered into a standard regression equation. The score for the trin in a previously unseen essay can then be predicted with the standard regression equation

Score = α + Σ_i β_i P_i    (1)

where α is a constant and β_1, β_2, ..., β_k are the weights (i.e., regression coefficients) associated with the proxes P_1, P_2, ..., P_k. Page's latest experiments achieved a multiple regression correlation as high as 0.87 with human graders [12].</s> |
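A minimal sketch of PEG-style scoring under Eq. (1): regression weights are fit to prox measurements of pregraded essays by ordinary least squares, then applied to a new essay's proxes. The prox values below (word count, commas, transition words) are illustrative only, not Page's actual feature set.

import numpy as np

proxes = np.array([[450, 12, 3],
                   [820, 25, 7],
                   [600, 18, 5],
                   [300,  8, 2]], dtype=float)   # one row of prox values per training essay
scores = np.array([62.0, 88.0, 75.0, 50.0])      # human grades for the same essays

X = np.hstack([np.ones((len(proxes), 1)), proxes])  # prepend a column for the constant alpha
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
alpha, betas = coef[0], coef[1:]

new_essay = np.array([700.0, 20.0, 6.0])
print(alpha + betas @ new_essay)                 # predicted score via Eq. (1)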
<s>PEG does have its drawbacks, however. It relies purely on a statistical multiple regression technique that grades essays on the basis of writing quality, taking no account of content. The PEG system needs to be trained for each essay set used; Page's training data was typically several hundred essays, comprising 50–67% of the total number. Moreover, PEG is susceptible to cheating.

2.2 Arabic Essay Scoring System

An automatic Arabic online essay grading system was developed by Nahar et al. [2]. It uses statistical and computational linguistics techniques. In this system, a model answer must be provided by the instructor. It is applicable to short essays and is designed for the Arabic language only.

2.3 E-Rater

E-Rater is an essay scoring system developed by Burstein et al. [3]. The basic technique of E-Rater is similar to that of PEG, but it uses statistical techniques along with NLP. E-Rater uses a vector-space model to measure semantic content. Originally developed for use in IR, this model starts with a co-occurrence matrix in which the rows represent terms and the columns represent documents. Terms may be any meaningful unit of information, usually words or short phrases, and documents may be any unit of information containing terms, such as sentences, paragraphs, articles, essays, or books. The value in a particular cell may be a simple binary 1 or 0 (indicating the presence or absence of the term in the document) or a natural number indicating the frequency with which the term occurs in the document. Typically, each cell value is adjusted with an information-theoretic transformation. Such transformations, widely used in IR, weight terms so that they more properly reflect their importance within the document. For example, one popular measure known as TF-IDF (term frequency-inverse document frequency) uses the following formula:

W_ij = tf_ij × log2(N / n)    (2)

where W_ij is the weight of term i in document j, tf_ij is the frequency of term i in document j, N is the total number of documents, and n is the number of documents in which term i occurs. After the weighting, document vectors are compared with each other using some mathematical measure of vector similarity, such as the cosine coefficient between documents A and B:

Cos(A, B) = Σ_i (A_i × B_i) / (|A| |B|)    (3)

In E-Rater's case, each "document" of the co-occurrence matrix is the aggregation of pregraded essays that have received the same grade for content. The rows are composed of all words appearing in the essays, minus a "stop list" of words with negligible semantic content (a, the, of, etc.). After an optional information-theoretic weighting, a document vector for an ungraded essay is constructed in the same manner, and its cosine coefficients with all the pregraded essay vectors are computed. The essay receives as its "topicality" score the grade of the group it most closely matches. E-Rater grades essays with 87% agreement with human graders [12]. E-Rater cannot detect certain things, such as humor, spelling errors, or grammar. It analyzes structure through the use of transitional phrases, paragraph changes, etc., and evaluates content by comparing an essay's score with those of other students. If a student writes a brilliant argument in an unusual argument style, E-Rater will not detect it.</s> |
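The weighting and comparison of Eqs. (2)-(3) can be sketched as follows; the term counts are illustrative toy values, not e-rater's actual implementation.

import numpy as np

tf = np.array([[3, 0, 1],      # raw term-by-document counts:
               [1, 2, 0],      # rows are terms, columns are documents
               [0, 1, 4]], dtype=float)

N = tf.shape[1]                      # total number of documents
n = (tf > 0).sum(axis=1)             # number of documents containing each term
W = tf * np.log2(N / n)[:, None]     # Eq. (2): W_ij = tf_ij * log2(N / n)

a, b = W[:, 0], W[:, 1]              # weighted vectors of documents 0 and 1
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))   # Eq. (3)
print(cos)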
<s>2.4 IntelliMetric

IntelliMetric was developed by Vantage Learning [4]. It uses a blend of AI, NLP, and statistical technologies. CogniSearch is a system specifically developed for use with IntelliMetric to understand natural language in support of essay scoring. IntelliMetric needs to be "trained" with a set of essays that have been scored beforehand, including "known scores" determined by human expert raters. The system employs multiple steps to analyze essays. First, the system internalizes the known scores in a set of training essays. The second step involves testing the scoring model against a smaller set of essays with known scores for validation purposes. Finally, once the model scores the essays as desired, it is applied to new essays with unknown scores. The average Pearson correlation between human raters and the IntelliMetric system is 0.83 [4], [12].

2.5 Bayesian Essay Test Scoring System (BETSY)

Lawrence et al. developed BETSY, which classifies text based on trained material [5]. Two Bayesian models are commonly used in the text classification literature: the Multivariate Bernoulli Model (MBM) and the Bernoulli Model (BM). With the MBM, each essay is viewed as a special case of all the calibrated features, and the probability of each score for a given essay is computed as the product of the probabilities of the features contained in the essay. Under the MBM, the probability that essay d_i should receive score classification c_j is

P(d_i | c_j) = Π_t [ B_it × P(w_t | c_j) + (1 − B_it) × (1 − P(w_t | c_j)) ]    (4)

where the product runs over the v features in the vocabulary, B_it ∈ {0, 1} indicates whether feature t appears in essay i, and P(w_t | c_j) is the probability that feature w_t appears in an essay whose score is c_j. For the multivariate Bernoulli model, P(w_t | c_j) is the probability of feature w_t appearing at least once in an essay whose score is c_j. It is estimated from the training sample as

P(w_t | c_j) = (1 + D_jt) / (J + D_j)    (5)

where D_jt is the number of essays scored c_j that contain feature w_t, D_j is the number of essays in the training group scored c_j, and J is the number of score groups. The 1 in the numerator and the J in the denominator are Laplacian values that adjust for the fact that this is a sample probability and prevent P(w_t | c_j) from equaling zero or unity. A zero value for P(w_t | c_j) would dominate equation (4) and render the rest of the features useless. To score the trial essays, the probability that essay d_i should receive score classification c_j, given by equation (4), is multiplied by the prior probability and then normalized to yield the posterior probability; the score with the highest posterior probability is assigned to the essay. With the multinomial model, each essay is viewed as a sample of all the calibrated terms, and the probability of each score for a given essay is again computed as the product of the probabilities of the features contained in the essay. This model can take a long time to compute, since every term in the vocabulary must be examined. An accuracy of over 80% was achieved with BETSY [5].</s> |
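A sketch of the MBM scoring rule of Eqs. (4)-(5) with toy counts; note that the reconstruction of D_jt in Eq. (5) above is inferred from the surrounding definitions, and the class priors needed for the full posterior are omitted here.

import numpy as np

J = 4                                    # number of score groups
D_j = np.array([20, 35, 30, 15])         # essays per score group
D_jt = np.array([[ 2, 18,  5],           # essays in group j containing feature t
                 [10, 30,  8],
                 [22, 25, 12],
                 [14,  9, 11]])

P_wc = (1 + D_jt) / (J + D_j)[:, None]   # Eq. (5): Laplace-adjusted P(w_t | c_j)

essay = np.array([1, 0, 1])              # B_t: feature present (1) or absent (0)

# Eq. (4): use P(w|c) for present features, 1 - P(w|c) for absent ones.
likelihood = np.prod(np.where(essay == 1, P_wc, 1 - P_wc), axis=1)
print(likelihood.argmax())               # highest-likelihood score group (priors omitted)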
<s>2.6 KNN Approach

Bin et al. designed an essay grading technique that uses a text categorization model incorporating the KNN algorithm [6]. In KNN, each essay is transformed into the Vector Space Model (VSM). First, essays are preprocessed by removing stopwords; then the transformation takes place. The VSM can be represented as follows:

d_j = (w_1j, w_2j, w_3j, ..., w_mj)    (6)

where d_j denotes the j-th essay and w_ij denotes the weight of the i-th feature in the j-th essay. The TF-IDF term-weighting method is used; the TF-IDF weight of the i-th coordinate of the j-th transformed essay is:

TF-IDF(i, j) = TF(i, j) × log(N / DF_i)    (7)

where TF(i, j) is the frequency of feature i in essay j, N is the number of essays, and DF_i is the number of essays containing feature i. Dimension reduction techniques are applied, since the dimensionality of the vector space may be very high; two methods are used, term frequency (TF) and information gain (IG). The similarity of the test essay to each training essay is computed using the cosine formula:

Sim(d_i, d_j) = Σ_k (w_ki × w_kj) / ( sqrt(Σ_k w_ki²) × sqrt(Σ_k w_kj²) )    (8)

The results are sorted in decreasing order and the first k essays are selected; KNN then assigns the essay to the category containing the most essays among those k. Using the KNN algorithm, a precision of over 76% was achieved on a small corpus of text [6].

2.7 Latent Semantic Analysis based AEG techniques

2.7.1 What is Latent Semantic Analysis (LSA)?

LSA is a fully automatic mathematical/statistical IR technique that was originally designed for indexing documents and text retrieval [7], [15]. It is not a traditional natural language processing or artificial intelligence program; it uses no humanly constructed dictionaries, knowledge bases, semantic networks, grammars, or syntactic parsers, and it takes as its input only raw text parsed into words, defined as unique character strings and separated into meaningful passages or samples such as sentences or paragraphs. The first step of LSA is to represent the text as a word-by-document matrix in which each row stands for a unique word and each column stands for a text document, an essay, or another context. Each cell contains the frequency with which the word of its row appears in the passage denoted by its column. Next, LSA applies singular value decomposition (SVD) to the matrix. In SVD, a rectangular matrix is decomposed into the product of three other matrices [17], [18]. The first of these matrices has the same number of rows as the original matrix but fewer columns; its n columns correspond to new, specially derived factors such that there is no correlation between any pair of them (in mathematical terms, they are linearly independent). The third matrix has the same number of columns as the original but only n rows, also linearly independent. In the middle is a diagonal n × n matrix of what are known as singular values; its purpose is to scale the factors in the other two matrices such that, when the three are multiplied, the original matrix is perfectly recomposed. The word-document co-occurrence matrix A_{t×d} is decomposed as follows:

A_{t×d} = U_{t×n} × S_{n×n} × V_{n×d}    (9)

where A is a t × d word-by-document matrix, U is a t × n orthogonal matrix, S is an n × n diagonal matrix, and V is an n × d orthogonal matrix. The dimensions of the SVD matrices are then reduced. The purpose of the dimensionality reduction step is to reduce the noise and unimportant details in the data so that the underlying semantic structure can be used to compare the content of essays [19]. The dimensionality reduction is performed by removing one or more of the smallest singular values from the singular matrix S and deleting the same number of columns and rows from U and V, respectively. In this case, the product of the three matrices turns out to be a least-squares best fit to the original matrix. For instance, when the n − k smallest singular values are deleted from S, the dimensionality of U and V is effectively reduced as well. The new product, A_k, still has t rows and d columns, but is only approximately equal to the original matrix A_{t×d}:

A_{t×d} ≈ A_k = U_k × S_k × V_k    (10)</s> |
2.7.2 Automatic Thai-language Essay Scoring using Artificial Neural Networks (ANN) and LSA

An automated Thai-language essay scoring system was developed by Chanunya et al. [8]. In this method, at first, raw term frequency vectors of the essays and their corresponding human scores are used to train the neural network and obtain the machine scores. In the second step, LSA is used to preprocess the raw term frequencies before feeding them to the neural network. The experimental results show that the combination of LSA and ANN is effective in emulating human graders within the experimental conditions, and that the combination of both techniques is superior to ANN alone.

2.7.3 Automated Japanese Essay Scoring System: JESS

JESS was developed for automated scoring of Japanese-language essays [10]. The core element of JESS is Latent Semantic Indexing (LSI). LSI begins after performing SVD on the t × d term-by-document matrix X (t: number of words; d: number of documents) indicating the frequency of words appearing in a sufficiently large number of documents. The process extracts diagonal elements from the singular value matrix up to the kth element to form a new matrix S. Likewise, it extracts the left and right SVD matrices up to the kth column to form new matrices T and D. The reduced SVD can be expressed as follows:

$\hat{X} = T S D^{T}$   (11)

Here, $\hat{X}$ is an approximation of X, with T being a t × k matrix, S a k × k square diagonal matrix, and $D^T$ a k × d matrix. An essay e to be scored can be expressed by a t-dimension word vector $x_e$ based on morphological analysis, and using this, the 1 × k document vector $d_e$ corresponding to a row in document space D can be derived as follows:

$d_e = x_e^{T} \, T \, S^{-1}$   (12)

Similarly, a k-dimension vector $d_q$ corresponding to essay prompt q can be obtained. The similarity between these documents is denoted by $r(d_e, d_q)$, which is given by the cosine of the angle formed between the two document vectors. JESS has been shown to be valid for essays in the range of 800 to 1600 characters.

2.7.4 Apex

Apex (Assistant for Preparing Exams) was developed by Benoit et al. [11]. It relies on a semantic text analysis method called LSA. Apex is used to grade a student essay with respect to the text of a course; it can also provide detailed assessments of the content. The environment is designed so that the student can select a topic, write an essay on that topic, get various assessments, then rewrite the text, submit it again, and so on. The submitted essay is compared with the content of the course, and a semantic similarity is produced using LSA. Apex provides a message to the student according to the value of the similarity. The highest correlation found between Apex and the human grader is 0.83.

2.7.5 Intelligent Essay Assessor (IEA)

IEA is an essay grading technique that was developed by Thomas et al. [14]. IEA is based on the LSA technique. According to IEA, a matrix for the essay documents is built and then transformed by the SVD technique to approximately reproduce the matrix using the reduced dimensional matrices built for the essay topic domain semantic space. The semantic space typically consists of human-graded essays. Each essay to be graded is converted into a column vector, with the essay representing a new source with cell values based on the terms (rows) from the original matrix. Cosine similarity is used to calculate a similarity score for the essay column vector relative to each column of the reduced term-by-document matrix. The essay's grade is determined by averaging the similarity scores from a predetermined number of sources with which it is most similar. IEA automatically assesses and critiques electronically submitted text essays. It supplies instantaneous feedback on the content and the quality of the student's writing. A test conducted on GMAT essays using the IEA system resulted in percentages of adjacent agreement with human graders between 85% and 91% [12].

2.8 Summary

This chapter described different types of existing AEG techniques. Existing AEG systems focus on mechanical properties (grammar, spelling, punctuation) and on simple stylistic features, such as wordiness and overuse of the passive voice. However, syntax and style alone are not sufficient to judge the merit of an essay. We have discussed the LSA-based AEG techniques thoroughly because we have taken LSA as the basis of our architecture. LSA is an IR-based statistical technique for comparing the semantic similarity of texts. The existing LSA-based AEG techniques do not consider the word sequence in the documents, and the creation of the word-by-document matrix in LSA is somewhat arbitrary. In the next chapter we discuss the system architecture and analysis of our developed AEG technique.
Chapter 3

AEG with GLSA: System Architecture and Analysis

A number of researchers are active in developing specialized approaches and software systems for the assessment of students' submitted essays. Yet no solution exists for using computers to assess essays as a replica of the human grader. Moreover, most AEG systems are based on the English language, and no solution exists for the Bangla language. We have developed a new approach for automated scoring of Bangla essays whose scores agree more closely with the human grader. We call our system ABESS (Automatic Bangla Essay Scoring System). This chapter discusses our system architecture in detail. We have developed our system using the Generalized Latent Semantic Analysis (GLSA) technique, which is more accurate and capable of grading Bangla-language essays.

Generally, LSA represents documents and their word content in a large two-dimensional matrix semantic space. Using a matrix algebra technique known as SVD, new relationships between words and documents are uncovered, and existing relationships are modified to represent their true significance more accurately [17]. A matrix represents the words and their contexts: each word represents a row in the matrix, while each column represents the sentences, paragraphs, and other subdivisions of the context in which the word occurs [18]. The traditional word-by-document matrix creation of LSA does not consider word sequence in a document; in the formation of the word-by-document matrix, the word pair "carbon dioxide" gives the same result as "dioxide carbon". We have developed our system using GLSA. In GLSA, an n-gram-by-document matrix is created instead of the word-by-document matrix of LSA [20]. An n-gram is a subsequence of n items from a given sequence [21]-[23]. The items can be syllables, letters, or words according to the application. In our architecture we have considered an n-gram as a sequence of words. An n-gram of size 1 is referred to as a "unigram", size 2 is a "bigram", size 3 is a "trigram", size 4 is a "fourgram", and size n is simply called an "n-gram". According to GLSA, a bigram vector for "carbon dioxide" is atomic, rather than the combination of "carbon" and "dioxide". So GLSA preserves the proximity of words in a sentence. We have used GLSA because it generates clearer concepts than LSA.

Our whole system architecture is shown in Fig. 3.1. There are three main modules of the system: the training essay set generation module, the ABESS grading module, and the performance evaluation module. The system is trained using pregraded essays for a particular topic. The training essays are tuned by sample evaluation: some sample essays are graded by instructors and graded by ABESS using the training essays, and the accuracy is measured. If the desired accuracy is obtained, the training essays are used for large-scale essay evaluation. If the desired accuracy is not met, more training essays are added to improve accuracy.

Fig. 3.1: Overall framework of ABESS
3.1 Training Essay Set Generation

The training essay set generation is shown in Fig. 3.2. We can select essays of a particular subject at any level. The essays are graded first by more than one human expert on that subject; the number of human graders may be increased to keep the system unbiased. The average value of the human grades is treated as the training score of a particular training essay.

Fig. 3.2: Training essay set generation

3.1.1 Preprocessing the Training Bangla Essays

We have preprocessed the training Bangla essays, because document preprocessing improves results for information retrieval [25]. Preprocessing is done in three steps: stopword removal, stemming the words to their roots, and selecting the n-gram index terms.

3.1.1.1 Stopword Removal

In the stopword removal step we remove the most frequent words. We have removed stopwords such as “e”, “ei”, “eবং”, “eর”, “িকn”, “o”, “তাi”, “আবার”, “েয”, “তেব”, “েস”, “তারপর” from our Bangla essays.

3.1.1.2 Word Stemming

After removing the stopwords we stem the words to their roots. We have developed a word-stemming heuristic for the Bangla language. According to our stemming heuristic, the word “বাংলােদেশর” is converted to “বাংলােদশ”, the word “পিৃথবীর” is converted to “পিৃথবী”, the word “িবমানবািহনীেক” is converted to “িবমানবািহনী”, and so on.

3.1.2 n-gram by Document Matrix Creation

This is our main feature for overcoming the drawbacks of LSA-based AEG systems. We create an n-gram-by-document matrix instead of the word-by-document matrix of LSA. Each row of the n-gram-by-document matrix is assigned an n-gram, whereas each column represents a training essay. A unigram, its related n-grams, and synonyms of the unigram are grouped to make the index term for a row. Each cell of the matrix is filled with the frequency of the n-gram in the essay multiplied by n.

3.1.2.1 n-gram Basics

An n-gram is a subsequence of n items from a given sequence. The items in question can be phonemes, syllables, letters, words, or base pairs according to the application. An n-gram of size 1 is referred to as a "unigram"; size 2 is a "bigram" (or, less commonly, a "digram"); size 3 is a "trigram"; and size 4 or more is simply called an "n-gram". An n-gram model is a type of probabilistic model for predicting the next item in such a sequence. n-gram models are used in various areas of statistical natural language processing and genetic sequence analysis. In this thesis we have used word n-grams for indexing. Using word n-grams, the sentence "Birds fly on the sky" produces the following n-grams:

Unigrams: "Birds", "fly", "on", "the", "sky"
Bigrams: "Birds fly", "fly on", "on the", "the sky"
Trigrams: "Birds fly on", "fly on the", "on the sky"
Fourgram: "Birds fly on the sky"
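The word n-gram extraction described above is straightforward to implement. The sketch below is a minimal Python illustration (not the thesis's actual C# implementation) that reproduces the "Birds fly on the sky" example:

    def word_ngrams(sentence, n_max=4):
        # Return all word n-grams of size 1..n_max, as in Section 3.1.2.1
        words = sentence.split()
        return [" ".join(words[i:i + n])
                for n in range(1, n_max + 1)
                for i in range(len(words) - n + 1)]

    # Reproduces the unigrams, bigrams, trigrams and fourgram listed above
    print(word_ngrams("Birds fly on the sky"))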
3.1.2.2 Selecting the n-gram Index Terms

The n-gram index terms are selected automatically from the pregraded training essays and course materials for making the n-gram-by-document matrix. The n-grams that are present in at least two essays are selected automatically as index terms.

3.1.2.3 Weighting of the n-gram by Document Matrix

Each cell of the n-gram-by-document matrix is filled with the frequency of the n-gram multiplied by n: the weight is increased by 1 if an indexed unigram matches in the essay, by 2 if a bigram matches, and by n if an n-gram matches in the essay.

3.1.3 Compute the SVD of the n-gram by Document Matrix

In linear algebra, the singular value decomposition (SVD) is an important factorization of a rectangular real or complex matrix, with many applications in signal processing and information retrieval [17]. In the analysis part of this chapter, Algorithm I represents the SVD of the n-gram-by-document matrix. SVD factorizes a matrix into three matrices. Applications that employ the SVD include computing the pseudo-inverse, least-squares fitting of data, matrix approximation, and determining the rank, range, and null space of a matrix. The n-gram-by-document matrix $A_{t \times d}$ is decomposed using SVD as follows:

$A_{t \times d} = U_{t \times n} \times S_{n \times n} \times V_{n \times d}^{T}$   (13)

where A is a $t \times d$ n-gram-by-document matrix, U is a $t \times n$ orthogonal matrix, S is an $n \times n$ diagonal matrix, and V is a $d \times n$ orthogonal matrix.

Fig. 3.3 illustrates the SVD of the n-gram-by-document matrix. The matrix $A_{t \times d}$ is decomposed as the product of three smaller matrices of a particular form. The first of these matrices has the same number of rows as the original matrix but fewer columns, i.e., the first matrix is made from n-grams by singular values. The third matrix has the same number of columns as the original but only n rows, also linearly independent, i.e., the third matrix is made from singular values by documents. In the middle is a diagonal n × n matrix of singular values, whose purpose is to scale the factors in the other two matrices such that when the three are multiplied, the original matrix is perfectly recomposed.

Fig. 3.3: The SVD of a matrix

The columns of U are orthogonal eigenvectors of $AA^T$, the columns of V are orthogonal eigenvectors of $A^TA$, and S is a diagonal matrix containing the square roots of their (shared) eigenvalues in descending order.
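A minimal sketch of the matrix construction of Sections 3.1.2.2-3.1.2.3, again in illustrative Python rather than the system's actual code: index terms are the n-grams occurring in at least two essays, each cell holds tf × n, and the word_ngrams helper sketched earlier is assumed. This is the matrix that Algorithm I then decomposes.

    import numpy as np

    def ngram_document_matrix(essays, n_max=3):
        # Index terms: n-grams present in at least two essays (Section 3.1.2.2)
        per_essay = [word_ngrams(e, n_max) for e in essays]
        terms = sorted({g for i, gs in enumerate(per_essay)
                        for g in set(gs)
                        if any(g in set(per_essay[j])
                               for j in range(len(essays)) if j != i)})
        # Cell a_ij = frequency of n-gram i in essay j, times n (Section 3.1.2.3)
        A = np.zeros((len(terms), len(essays)))
        for j, gs in enumerate(per_essay):
            for i, g in enumerate(terms):
                A[i, j] = gs.count(g) * len(g.split())
        return A, terms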
3.1.4 Dimensionality Reduction of the SVD Matrices

The dimension of the SVD matrices is reduced. The purpose of the dimensionality reduction step is to reduce the noise and unimportant details in the data so that the underlying semantic structure can be used to compare the content of essays [18], [19]. Algorithm II represents the dimension reduction of the SVD matrices. The dimensionality reduction operation is done by removing one or more of the smallest singular values from the singular matrix S and deleting the same number of columns and rows from U and V, respectively, as in Fig. 3.4.

Fig. 3.4: The truncation of the SVD matrices

The selection of the smallest values to remove from S is an ad hoc heuristic [19]. In Fig. 3.4 we see that the new product, $A_k$, still has t rows and d columns as in Fig. 3.3, but is only approximately equal to the original matrix A:

$A_{t \times d} \approx A_k = U_k \times S_k \times V_k^{T}$   (14)

3.1.5 Human Grading of Training Essays

Each training essay is graded by more than one human grader. The average grade point of the human grades is the grade point assigned to the corresponding training essay; this grade point is treated as the training essay score. The training essays along with their grades are stored in the database for automated essay evaluation.

3.1.6 Essay Set Generation

The truncated SVD matrices are used for making the training essay vectors, which are created as follows. For each document vector $d_j$:

$d'_j = d_j^{T} \times U_k \times S_k^{-1}$   (15)

The document vectors $d'_j$, along with the human grades of the training essays, make up the training essay set.

3.2 The Evaluation of the Submitted Essay

Fig. 3.5 shows the evaluation part, where the submitted essays are graded automatically by the system.

Fig. 3.5: ABESS evaluation of the submitted essay

3.2.1 Grammatical Error Checking

The system checks the submitted essay for lingual errors; this checking is part of the evaluation. The system uses an n-gram based statistical grammar checker [23]. First the system performs parts-of-speech (POS) tagging, then uses a trigram model (which looks at the two previous tags) to determine the probability of the tag sequence, and finally makes the decision on grammatical correctness based on the probability of the tag sequence. For POS tagging, we used the implementation of Brill's tagger [24].

3.2.2 Preprocessing of the Submitted Essay

The student essays are preprocessed first, as in the training essay set generation. The essays are checked for lingual errors, and some percentage of positive or negative marking is applied on the basis of the lingual error checking. Stopwords are removed from the essays and the words are stemmed to their roots.

3.2.3 Query Vector Creation

First a query matrix (q) is formed for the submitted essay according to the rules for making the n-gram-by-document matrix. Fig. 3.6 shows the creation of the query matrix.

Fig. 3.6: Query matrix (q)

The query vector is created from the submitted essay according to the following equation:

$q' = q^{T} \times U_k \times S_k^{-1}$   (16)

where $q^T$ is the transpose of the query matrix, $U_k$ is the left truncated orthogonal matrix, and $S_k^{-1}$ is the inverse of the truncated singular matrix of the SVD.
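Equations (15) and (16) project the training essays and the submitted essay into the same reduced space. The following is a hedged numpy sketch of that folding-in step, assuming A is the n-gram-by-document matrix from the previous sketch; it is an illustration of the mathematics, not the thesis's implementation.

    import numpy as np

    def project_vectors(A, k):
        # Truncated SVD (eqs. 13-14): keep the k largest singular values
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        Uk, Sk_inv = U[:, :k], np.diag(1.0 / s[:k])
        # eq. (15): training essay vectors d'_j = d_j^T U_k S_k^{-1} (rows of D)
        D = A.T @ Uk @ Sk_inv
        # eq. (16) applies the same mapping to a query column q: q' = q^T U_k S_k^{-1}
        fold_in = lambda q: q @ Uk @ Sk_inv
        return D, fold_in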
3.2.4 Assigning Grades to the Submitted Essays using Cosine Similarity

A training essay vector $d'_j$ is calculated for each jth essay, and a query vector $q'$ is calculated for the submitted essay. We use cosine similarity for finding the similarity between the query vector $q'$ and each essay vector $d'_j$. Cosine similarity is a measure of the similarity between two vectors obtained by measuring the cosine of the angle between them. The cosine similarity value between two vectors ranges from −1, meaning exactly opposite, to 1, meaning exactly the same, with 0 usually indicating independence and in-between values indicating intermediate similarity or dissimilarity. Fig. 3.7 shows the angles between two essay vectors $d'_1$ and $d'_2$ and the query vector $q'$: θ denotes the angle between $d'_1$ and $q'$, and β denotes the angle between $d'_2$ and $q'$. Fig. 3.7 shows that document vector $d'_1$ is closer to the query vector $q'$ than $d'_2$.

Fig. 3.7: Angle between document vectors and query vector

The cosine similarity between the query vector $q'$ and each essay vector $d'_j$ is calculated by the following equation:

$Sim(q', d'_j) = \cos\theta = \dfrac{\sum_i w_{ij}\, d_{ij}}{\sqrt{\sum_i w_{ij}^2}\,\sqrt{\sum_i d_{ij}^2}}$   (17)

where $Sim(q', d'_j)$ is the similarity between the query vector $q'$ and the jth document vector $d'_j$, $d_{ij}$ is the weight of n-gram $N_i$ in essay vector $d'_j$, and $w_{ij}$ is the weight of n-gram $N_i$ in the query vector $q'$.

The highest cosine similarity value between the query vector and a training essay vector is used for grading the submitted essay: the submitted essay is assigned the grade point of the training essay with which it has maximum similarity. This grade point is treated as the ABESS score. Other similarity measures, such as Pearson's correlation and Dice's coefficient, cannot calculate the angle between vectors. Sine similarity is not used here, because the sine of the angle between two vectors is 0 (the lowest value) when the vectors are identical and 1 (the highest value) when the two vectors are completely different.

3.3 The Evaluation of ABESS

Fig. 3.8 shows the evaluation of ABESS. The submitted essays are graded by two or more human graders, and the average value of the human grades is treated as the human grade of the submitted essay. ABESS generates an automatic grade for the submitted essay, which is treated as the ABESS score.

Fig. 3.8: The evaluation of ABESS

The reliability of our system is measured by comparing the average human score with the ABESS score. If the ABESS score is very close to the human score, then the system is treated as a reliable system.
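Putting together the query projection of Section 3.2.3, the grading rule of Section 3.2.4, and the reliability comparison of Section 3.3, a toy grading and evaluation step might look like the following sketch (illustrative Python; D, train_grades and q_vec are assumed to come from the earlier sketches):

    import numpy as np

    def abess_grade(D, train_grades, q_vec):
        # Cosine similarity (eq. 17) between the query vector and every essay vector
        sims = (D @ q_vec) / (np.linalg.norm(D, axis=1) * np.linalg.norm(q_vec) + 1e-12)
        # The submitted essay receives the grade point of the most similar training essay
        return train_grades[int(np.argmax(sims))]

    def reliability(human_scores, abess_scores):
        # Section 3.3: mean absolute deviation of ABESS scores from average human scores
        return float(np.mean(np.abs(np.asarray(human_scores) - np.asarray(abess_scores))))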
3.4 Analysis of AEG with GLSA

The preprocessing is done by removing stopwords and stemming words: stopwords are removed from the Bangla essay and words are stemmed to their roots. These preprocessing steps increase the performance of our AEG system. The n-gram-by-document matrix is created using the frequency of n-grams in a document; each cell of the n-gram-by-document matrix is filled as $a_{ij} = tf_{ij} \times n$. The n-gram-by-document matrix is then decomposed by SVD using Algorithm I.

Algorithm I: Creation of SVD Matrices
Input: Matrix A of order m × n
Output: The matrices $U_{m \times p}$, $S_{p \times p}$, $V_{p \times n}$ such that $A_{m \times n} = U_{m \times p} \times S_{p \times p} \times V_{p \times n}$
Step 1: Multiply the transpose of A by A and put the result in T
Step 2: Compute λ1, λ2, λ3, ..., λn, the eigenvalues of T
Step 3: FOR i = 1 to n DO μi = sqrt(λi) ENDFOR
Step 4: Sort μ1, μ2, ..., μn in descending order
Step 5: Initialize S:
    FOR i = 1 to m DO
        FOR j = 1 to n DO
            IF (i = j) THEN set Sij = μi ELSE set Sij = 0 ENDIF
        ENDFOR
    ENDFOR
Step 6: FOR i = 1 to n DO ui = eigenvector of λi ENDFOR
Step 7: Create a matrix $V_{p \times n}$ having the ui as columns
Step 8: $V^T_{p \times n}$ = the transpose of $V_{p \times n}$
Step 9: Calculate $U_{m \times p} = A_{m \times n} \times V_{n \times p} \times S_{p \times p}^{-1}$

In Algorithm I, the time complexity of Step 1, the multiplication of $A_{m \times n}$ with its transpose, is O(mn²). The time complexity of Step 2, calculating the eigenvalues, is O(mn²). The time complexity of Step 3 is O(n). The complexity of Step 4, sorting n numbers, is O(n log n). The complexity of Step 5 is O(mn). The complexity of Step 6, calculating the eigenvectors, is O(mn²). The complexity of Step 7 is O(mn). The complexity of Step 8 is O(mn). The complexity of Step 9 is O(mn²). The total complexity of Algorithm I is

$O(mn^2 + mn^2 + n + n\log n + mn + mn^2 + mn + mn + mn^2) = O(4mn^2 + 3mn + n + n\log n) \approx O(mn^2)$

So the complexity of the SVD algorithm is O(mn²) for a matrix of order m × n.

The dimension of the SVD matrices is then reduced. The purpose of the dimensionality reduction is to reduce the noise and unimportant details in the data so that the underlying semantic structure can be used to compare the content of essays. The SVD matrices $U_{t \times n}$, $S_{n \times n}$ and $V_{d \times n}$ are truncated by removing one or more of the smallest singular values from the singular matrix S and deleting the same number of columns and rows from U and V, respectively. We have removed the singular values less than 0.50 from $S_{n \times n}$; the selection of 0.50 is an ad hoc heuristic [19]. The dimension reduction of the SVD matrices is shown in Algorithm II.

Algorithm II: Dimension Reduction of SVD Matrices
Input: $U_{m \times p}$, $S_{p \times p}$, $V_{p \times n}$ matrices
Output: $U_k$, $S_k$ and $V_k$
Step 1: Set k to 0
Step 2: FOR i = 0 to p − 1 DO
    IF ($S_{i,i} \ge 0.5$) THEN k = i + 1 ENDIF
ENDFOR
Step 3: $S_k$ = the submatrix of $S_{p \times p}$ of order k × k
Step 4: $U_k$ = the submatrix of $U_{m \times p}$ of order m × k
Step 5: $V_k^T$ = the submatrix of $V^T_{p \times n}$ of order k × n

In Algorithm II, the complexity of Step 1 is O(1).
The complexity of Step 2 is O(p). The complexity of Step 3 is O(k²). The complexity of Step 4 is O(mk). The complexity of Step 5 is O(pk). The total complexity of Algorithm II is

$O(1 + p + k^2 + mk + pk) \approx O(mk + pk)$

So the complexity of Algorithm II for the reduction of the SVD matrices is O(mk + pk), where m is the number of n-grams, p is the number of training essays, and k is the reduced dimension.

The essay vectors are created from the training essays; each essay generates an essay vector. In the training essay vector creation algorithm, the training essays are preprocessed first, and then the n-gram index terms are selected from the training essays. The n-gram-by-document matrix is created using the n-gram index terms, and the training essay vectors are created from the n-gram-by-document matrix. The essay vector creation procedure is shown in Algorithm III.

Algorithm III: Training Essay Vector Creation
Input: Set of training essays, E = {E1, E2, ..., Ep}
Output: Set of essay vectors, D = {D1, D2, ..., Dp}
Step 1: FOR i = 1 to p DO
    a. Remove stopwords from essay Ei
    b. Stem the words of essay Ei to their roots
ENDFOR
Step 2: Select n-grams as the index terms of the n-gram-by-document matrix
Step 3: Build an n-gram-by-document matrix $A_{m \times p}$, where each matrix cell $a_{ij}$ is the number of times n-gram Ni appears in document dj multiplied by n, i.e., $a_{ij} = tf_{ij} \times n$
Step 4: Decompose $A_{m \times p}$ using SVD, such that $A_{m \times p} = U_{m \times r} \times S_{r \times r} \times V_{r \times p}$
Step 5: Truncate U, S and $V^T$ and make $A_k = U_{m \times k} \times S_{k \times k} \times V_{k \times p}$
Step 6: FOR j = 1 to p DO
    Make the essay vector $D_j = D_j^T \times U_{m \times k} \times S_{k \times k}^{-1}$
ENDFOR

In Algorithm III, the complexity of Step 1 is $O(\sum S_i)$, where $S_i$ denotes the size of the ith essay. The complexity of Step 2 is $O(n \sum S_i)$. The complexity of Step 3 is $O(mp \sum S_i)$. The complexity of Step 4 is O(mp²). The complexity of Step 5 is O(mp²). The complexity of Step 6 is O(mk²). The total complexity of Algorithm III is

$O\big((1+n)\sum S_i + mp\sum S_i + 2mp^2 + mk^2\big) \approx O\big((1+n)\sum S_i + mp\sum S_i + 2mp^2\big)$

In the evaluation part of ABESS, the query matrix (q) is formed for the submitted essay according to the rules for making the n-gram-by-document matrix. The query vector (Q) is created from the submitted essay using Algorithm IV.

Algorithm IV: Query Vector Creation
Input: A submitted essay for grading, Eq
Output: Query vector, Q
Step 1: Preprocess the submitted essay:
    a. Remove stopwords from essay Eq
    b. Stem the words of essay Eq to their roots
Step 2: Build a one-dimensional query matrix $q_{m \times 1}$ by the same rule used for creating the n-gram-by-document matrix
Step 3: Make the query vector $Q = q_{m \times 1}^T \times U_{m \times k} \times S_{k \times k}^{-1}$
In Algorithm IV, the complexity of Step 1 is $O(S_q)$, where $S_q$ denotes the size of the submitted essay. The complexity of Step 2 is O(m). The complexity of Step 3 is O(mk²). The total complexity of Algorithm IV is

$O(S_q + m + mk^2) \approx O(S_q + mk^2)$

The similarity between the query vector and the document vectors is used for grading the submitted essay; cosine similarity is used for finding the similarity between vectors. The following algorithm shows the evaluation of the submitted essay.

Algorithm V: Evaluation of Submitted Essay
Input: Query vector Q′ of the submitted essay and a set of essay vectors, D = {D1, D2, ..., Dp}
Output: Grade G, calculated by ABESS for the submitted essay
Step 1: Compute the cosine similarity between Q′ and each essay vector Di
Step 2: Find the maximum cosine similarity value, M
Step 3: Assign to the submitted essay the grade point (G) of the training essay that yields the maximum similarity M

In Algorithm V, the complexity of Step 1 is O(p), the complexity of Step 2 is O(p), and the complexity of Step 3 is O(1). The total complexity of Algorithm V is

$O(p + p + 1) = O(2p + 1) \approx O(p)$

In the evaluation phase of ABESS, the grades of submitted essays are compared with the human grades for the reliability measure. For comparison, we compute the mean of errors by averaging the magnitude by which each machine score deviates from its corresponding human score. In addition, we compute the standard deviation of the errors. The mean and standard deviation are given by equations (18) and (19), respectively:

$\bar{x} = \dfrac{x_1 + x_2 + \cdots + x_n}{n}$   (18)

$SD = \sqrt{\dfrac{(x_1 - \bar{x})^2 + (x_2 - \bar{x})^2 + \cdots + (x_n - \bar{x})^2}{n}}$   (19)

where $\bar{x}$ is the arithmetic mean of all errors and $x_i$ is the absolute value of the error between the human score and the machine score, computed by $x_i = |HumanScore_i - MachineScore_i|$, i = 1...n. Here n is the number of essays in the data set; we have tested our system for n = 40, 80, and 20.
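These two error statistics are simple to compute. The following tiny numpy sketch (illustrative, with made-up scores) mirrors equations (18) and (19):

    import numpy as np

    human = np.array([4.0, 3.5, 3.0, 2.5])     # hypothetical human scores
    machine = np.array([4.0, 3.0, 3.0, 2.0])   # hypothetical ABESS scores

    errors = np.abs(human - machine)           # x_i = |HumanScore_i - MachineScore_i|
    mean_error = errors.mean()                               # eq. (18)
    sd_error = np.sqrt(((errors - mean_error) ** 2).mean())  # eq. (19)
    print(mean_error, sd_error)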
3.5 An Illustrative Example

We have selected 10 answers (treated as 10 essays) with marks for the question “‘ধমূেকতু’ পিtকার পিরচয় দাo” for generating the training essay set for the system. Table 3.1 shows the training essays with their corresponding human grades.

Table 3.1: Training answers with corresponding grades

Essay No. | Training Essay | Human Grade
1. | 1922 ি sােbর 12i আগs িবেdাহী কিব কাজী নজরলু iসলাম aধর্-সpািহক ‘ধমূেকতু’ পিtকা pকাশ কেরন। | 4
2. | ‘ধমূেকতু’ 1922 সােল কাজী নজরলু iসলাম কতৃর্ক pকািশত পিtকা । | 3
3. | ‘ধমূেকতু’ কাজী নজরলু iসলােমর পিtকা । | 2
4. | ‘ধমূেকতু’ 1922 সােল pকািশত পিtকা । | 2
5. | ‘ধমূেকতু’ রবীndনাথ কতৃর্ক pকািশত মািসক পিtকা । | 0
6. | ‘ধমূেকতু’ কিবগরু ুরবীndনাথ ঠাkর pকািশত ৈদিনক পিtকা । | 0
7. | ‘ধমূেকতু’ কাজী নজরলু iসলাম pকািশত aধর্-সpািহক পিtকা । | 3
8. | ‘ধমূেকতু’ কাজী নজরলু iসলােমর মািসক পিtকা । | 1
9. | ‘ধমূেকতু’ রবীndনাথ pকািশত পিtকা । | 0
10. | ‘ধমূেকতু’ নজরেুলর ৈদিনক পিtকা । | 1

After stemming and stopword removal, the above answers are converted to the following:

1. 1922 ি sাb 12i আগs িবেdাহী কিব কাজী নজরলু iসলাম aধর্-সpািহক ‘ধমূেকতু’ পিtকা pকাশ কেরন।
2. ‘ধমূেকতু’ 1922 সাল কাজী নজরলু iসলাম কতৃর্ক pকািশত পিtকা ।
3. ‘ধমূেকতু’ কাজী নজরলু iসলাম পিtকা ।
4. ‘ধমূেকতু’ 1922 সাল pকািশত পিtকা ।
5. ‘ধমূেকতু’ রবীndনাথ কতৃর্ক pকািশত মািসক পিtকা ।
6. ‘ধমূেকতু’ কিবগরু ুরবীndনাথ ঠাkর pকািশত ৈদিনক পিtকা ।
7. ‘ধমূেকতু’ কাজী নজরলু iসলাম pকািশত aধর্-সpািহক পিtকা ।
8. ‘ধমূেকতু’ কাজী নজরলু iসলাম মািসক পিtকা ।
9. ‘ধমূেকতু’ রবীndনাথ pকািশত পিtকা ।
10. ‘ধমূেকতু’ নজরলু ৈদিনক পিtকা ।

After stemming and stopword removal we selected the n-gram index terms from essays 1 to 10. We considered word n-grams and selected as index terms the n-grams that are present in at least two essays. The selected n-grams from essays 1 to 10 are shown in Table 3.2.

Table 3.2: List of selected n-grams for indexing

Unigrams: ‘ধমূেকতু’, 1922, সাল, কাজী, নজরলু, iসলাম, কতৃর্ক, রবীndনাথ, pকািশত, aধর্-সpািহক, ৈদিনক, মািসক, পিtকা
Bigrams: ‘ধমূেকতু’ 1922, ‘ধমূেকতু’ কাজী, 1922 ি sাb/সাল, কাজী নজরলু, নজরলু iসলাম, pকািশত পিtকা, ৈদিনক পিtকা, মািসক পিtকা
Trigrams: ‘ধমূেকতু’ 1922 ি sাb/সাল, ‘ধমূেকতু’ কাজী নজরলু, কাজী নজরলু iসলাম

3.5.1 n-gram by Document Matrix Creation

We created an n-gram-by-document matrix from the training essay set. Each row of the n-gram-by-document matrix is assigned an n-gram, whereas each column represents a training essay. A unigram, its related n-grams, and synonyms of the unigram are grouped to make the index term for a row. Each cell of the matrix is filled with the weight according to the weighting scheme shown in Table 3.3.

Table 3.3: Weighting scheme

Item | Weight
Unigram matching | Increment by 1
Bigram matching | Increment by 2
Trigram matching | Increment by 3
... | ...
n-gram matching | Increment by n

The n-gram-by-document matrix is shown in Table 3.4.

Table 3.4: n-gram by document matrix

Index term group (unigram, with related bigrams and trigrams) | E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 | E9 | E10
‘ধমূেকতু’ (+ ‘ধমূেকতু’ 1922, ‘ধমূেকতু’ কাজী; ‘ধমূেকতু’ 1922 ি sাb/সাল, ‘ধমূেকতু’ কাজী নজরলু) | 1 | 1+2+3 | 1+2+3 | 1+2+3 | 1 | 1 | 1+2+3 | 1+2+3 | 1 | 1
1922 (+ 1922 ি sাb/সাল) | 1+2 | 1+2 | 0 | 1+2 | 0 | 0 | 0 | 0 | 0 | 0
সাল | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
কাজী (+ কাজী নজরলু; কাজী নজরলু iসলাম) | 1+2+3 | 1+2+3 | 1+2+3 | 0 | 0 | 0 | 1+2+3 | 1+2+3 | 0 | 0
নজরলু (+ নজরলু iসলাম) | 1+2 | 1+2 | 1+2 | 0 | 0 | 0 | 1+2 | 1+2 | 0 | 1
iসলাম | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0
কতৃর্ক | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0
রবীndনাথ | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0
pকািশত (+ pকািশত পিtকা) | 0 | 1+2 | 0 | 1+2 | 1 | 1 | 1 | 0 | 1+2 | 0
aধর্-সpািহক | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0
ৈদিনক (+ ৈদিনক পিtকা) | 0 | 0 | 0 | 0 | 0 | 1+2 | 0 | 0 | 0 | 1+2
মািসক (+ মািসক পিtকা) | 0 | 0 | 0 | 0 | 1+2 | 0 | 0 | 1+2 | 0 | 0
পিtকা | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
The n-gram-by-document matrix from Table 3.4 is converted to the matrix A, presented below (rows: n-gram index terms; columns: essays E1-E10):

A =
 1.00 6.00 6.00 6.00 1.00 1.00 6.00 6.00 1.00 1.00
 3.00 3.00 0.00 3.00 0.00 0.00 0.00 0.00 0.00 0.00
 1.00 1.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00
 6.00 6.00 6.00 0.00 0.00 0.00 6.00 6.00 0.00 0.00
 3.00 3.00 3.00 0.00 0.00 0.00 3.00 3.00 0.00 1.00
 1.00 1.00 1.00 0.00 0.00 0.00 1.00 1.00 0.00 0.00
 0.00 1.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00
 0.00 0.00 0.00 0.00 1.00 1.00 0.00 0.00 1.00 0.00
 0.00 3.00 0.00 3.00 1.00 1.00 1.00 0.00 3.00 0.00
 1.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00
 0.00 0.00 0.00 0.00 0.00 3.00 0.00 0.00 0.00 3.00
 0.00 0.00 0.00 0.00 3.00 0.00 0.00 3.00 0.00 0.00
 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00

We calculated the SVD of A using Algorithm I. The n-gram-by-document matrix A has been decomposed into three matrices U, S and V^T. Here, U is an orthogonal matrix, S is a singular matrix containing the singular values in descending order, and V^T is the transpose of an orthogonal matrix V. The SVD matrices U, S and V^T are presented below.

U =
 -0.64  0.46  0.32 -0.25  0.37  0.21  0.04 -0.02 -0.13  0.00
 -0.15  0.21 -0.68  0.07 -0.24  0.53  0.05 -0.12 -0.02 -0.02
 -0.05  0.07 -0.23  0.02 -0.08  0.18  0.02 -0.04 -0.01 -0.01
 -0.63 -0.54 -0.14  0.07 -0.11 -0.22 -0.06 -0.29 -0.06  0.20
 -0.32 -0.26 -0.03  0.18 -0.02 -0.02 -0.12  0.52  0.33 -0.32
 -0.11 -0.09 -0.02  0.01 -0.02 -0.04 -0.01 -0.05 -0.01  0.03
 -0.03  0.05 -0.01 -0.01 -0.22 -0.03 -0.60  0.32 -0.68 -0.13
 -0.01  0.10  0.09  0.15 -0.24 -0.30  0.21 -0.48 -0.38 -0.20
 -0.15  0.56 -0.24  0.03 -0.23 -0.61 -0.13  0.13  0.29  0.25
 -0.04 -0.07 -0.08  0.06 -0.02 -0.06  0.62  0.51 -0.40  0.42
 -0.02  0.14  0.23  0.88  0.10  0.19 -0.15 -0.06  0.01  0.28
 -0.08  0.01  0.48 -0.15 -0.76  0.28  0.03  0.04  0.18  0.20
 -0.13  0.13  0.07  0.26 -0.19 -0.08  0.39  0.09 -0.01 -0.67

S = diag(20.17, 7.21, 5.09, 4.39, 3.83, 2.71, 1.22, 0.80, 0.76, 0.63)   (a 10 × 10 diagonal matrix)

V^T =
 -0.31 -0.49 -0.44 -0.24 -0.06 -0.05 -0.45 -0.45 -0.06 -0.06
 -0.40  0.17 -0.17  0.73  0.19  0.23 -0.10 -0.16  0.33  0.11
 -0.57 -0.38  0.21 -0.20  0.33  0.18  0.14  0.49 -0.05  0.20
  0.29  0.01 -0.06 -0.21 -0.06  0.65 -0.04 -0.16  0.06  0.65
 -0.35 -0.10  0.35  0.14 -0.73  0.01  0.28 -0.25 -0.19  0.13
  0.16 -0.11 -0.07  0.42  0.02 -0.08 -0.32  0.24 -0.74  0.25
  0.41 -0.73 -0.06  0.34  0.00  0.05  0.35  0.02  0.21 -0.11
 -0.06  0.06 -0.33 -0.04  0.19 -0.55  0.47 -0.18 -0.03  0.54
  0.02 -0.04 -0.18  0.01 -0.48 -0.27 -0.33  0.52  0.46  0.27
 -0.11  0.16 -0.68 -0.05 -0.24  0.32  0.37  0.28 -0.21 -0.27
3.5.2 Truncation of SVD Matrices

We truncated the SVD matrices. The purpose of the truncation is to reduce the noise and unimportant details in the data so that the underlying semantic structure can be used to compare the content of essays. We removed the diagonal values of the singular matrix S that are less than 1 and removed the corresponding rows and columns of S; the same numbers of columns and rows have been removed from U and V^T, respectively. The removal of singular values less than 1 from S is an ad hoc heuristic [19]; we selected the value 1 for this example only, and the value may be different for other problem domains. The truncated U, S and V^T matrices are denoted U_k, S_k and V_k^T, respectively, and are presented below.

U_k =
 -0.64  0.46  0.32 -0.25  0.37  0.21  0.04
 -0.15  0.21 -0.68  0.07 -0.24  0.53  0.05
 -0.05  0.07 -0.23  0.02 -0.08  0.18  0.02
 -0.63 -0.54 -0.14  0.07 -0.11 -0.22 -0.06
 -0.32 -0.26 -0.03  0.18 -0.02 -0.02 -0.12
 -0.11 -0.09 -0.02  0.01 -0.02 -0.04 -0.01
 -0.03  0.05 -0.01 -0.01 -0.22 -0.03 -0.60
 -0.01  0.10  0.09  0.15 -0.24 -0.30  0.21
 -0.15  0.56 -0.24  0.03 -0.23 -0.61 -0.13
 -0.04 -0.07 -0.08  0.06 -0.02 -0.06  0.62
 -0.02  0.14  0.23  0.88  0.10  0.19 -0.15
 -0.08  0.01  0.48 -0.15 -0.76  0.28  0.03
 -0.13  0.13  0.07  0.26 -0.19 -0.08  0.39

S_k = diag(20.17, 7.21, 5.09, 4.39, 3.83, 2.71, 1.22)   (a 7 × 7 diagonal matrix)

V_k^T =
 -0.31 -0.49 -0.44 -0.24 -0.06 -0.05 -0.45 -0.45 -0.06 -0.06
 -0.40  0.17 -0.17  0.73  0.19  0.23 -0.10 -0.16  0.33  0.11
 -0.57 -0.38  0.21 -0.20  0.33  0.18  0.14  0.49 -0.05  0.20
  0.29  0.01 -0.06 -0.21 -0.06  0.65 -0.04 -0.16  0.06  0.65
 -0.35 -0.10  0.35  0.14 -0.73  0.01  0.28 -0.25 -0.19  0.13
  0.16 -0.11 -0.07  0.42  0.02 -0.08 -0.32  0.24 -0.74  0.25
  0.41 -0.73 -0.06  0.34  0.00  0.05  0.35  0.02  0.21 -0.11

We calculated the document matrix for each training essay. The creation of the document matrix for essay E1, “1922 ি sাb 12i আগs িবেdাহী কিব কাজী নজরলু iসলাম aধর্-সpািহক ‘ধমূেকতু’ পিtকা pকাশ কেরন”, is shown in Table 3.5. Each row represents an n-gram index term and the column represents the essay; each cell holds the weight of the n-gram, calculated according to the weighting scheme shown in Table 3.3.
Table 3.5: Creation of the document matrix for essay E1

Index term group | E1
‘ধমূেকতু’ (+ related bigrams/trigrams) | 1
1922 (+ 1922 ি sাb/সাল) | 1+2
সাল | 1
কাজী (+ কাজী নজরলু; কাজী নজরলু iসলাম) | 1+2+3
নজরলু (+ নজরলু iসলাম) | 1+2
iসলাম | 1
কতৃর্ক | 0
রবীndনাথ | 0
pকািশত পিtকা | 0
aধর্-সpািহক | 1
ৈদিনক পিtকা | 0
মািসক পিtকা | 0
পিtকা | 1

The document matrix from Table 3.5 is converted to the column vector d1, whose transpose is

d1^T = [1.00 3.00 1.00 6.00 3.00 1.00 0.00 0.00 0.00 1.00 0.00 0.00 1.00]

Document vectors are calculated for each essay using equation (15). The document vector for essay E1 is

d'_1 = d_1^T × U_k × S_k^{-1} = [-0.31 -0.40 -0.58 0.29 -0.36 0.16 0.40]

Similarly we calculated the document vectors for essays E2-E10, denoted d'_2 to d'_10:

d'_2 = [-0.49 0.17 -0.39 0.01 -0.11 -0.12 -0.75]
d'_3 = [-0.44 -0.17 0.21 -0.06 0.34 -0.09 -0.08]
d'_4 = [-0.25 0.74 -0.20 -0.21 0.14 0.41 0.34]
d'_5 = [-0.06 0.19 0.33 -0.06 -0.73 0.01 0.00]
d'_6 = [-0.05 0.23 0.19 0.65 0.00 -0.08 0.05]
d'_7 = [-0.45 -0.10 0.14 -0.04 0.27 -0.34 0.30]
d'_8 = [-0.45 -0.17 0.50 -0.17 -0.26 0.22 -0.01]
d'_9 = [-0.06 0.33 -0.05 0.06 -0.20 -0.74 0.21]
d'_10 = [-0.06 0.11 0.21 0.65 0.12 0.25 -0.11]

3.5.3 Evaluation of the Submitted Answer

We selected the submitted answer “‘ধমূেকতু’ কাজী নজরলু iসলাম কতৃর্ক pকািশত পিtকা” for the question “‘ধমূেকতু’ পিtকার পিরচয় দাo”. The query matrix (q) is calculated for the submitted answer. Table 3.6 shows the query matrix for the submitted answer. Each row represents an n-gram index term and the column represents the submitted answer; each cell holds the weight of the n-gram, calculated according to the rules of Table 3.3.

Table 3.6: Query matrix for the submitted answer

Index term group | Q
‘ধমূেকতু’ (+ ‘ধমূেকতু’ কাজী; ‘ধমূেকতু’ কাজী নজরলু) | 1+2+3
1922 (+ 1922 ি sাb/সাল) | 0
সাল | 0
কাজী (+ কাজী নজরলু; কাজী নজরলু iসলাম) | 1+2+3
নজরলু (+ নজরলু iসলাম) | 1+2
iসলাম | 1
কতৃর্ক | 1
রবীndনাথ | 0
pকািশত পিtকা | 1+2
aধর্-সpািহক | 0
ৈদিনক পিtকা | 0
মািসক পিtকা | 0
পিtকা | 1

The query matrix from Table 3.6 is converted to the matrix q, whose transpose is

q^T = [6.00 0.00 0.00 6.00 3.00 1.00 1.00 0.00 3.00 0.00 0.00 0.00 1.00]

Using equation (16) we calculated the query vector for the submitted answer, denoted q′:

q′ = [0.47 0.07 0.06 -0.04 0.10 -0.78 -0.89]
We calculated the cosine similarity between the query vector and each document vector, since the essay vectors computed from the training essays and the query vector computed from the submitted essay live in the same reduced space, and other similarity measures cannot calculate the angle between vectors. Table 3.7 shows the cosine similarity between each document vector and the query vector.

Table 3.7: Cosine similarity between document vectors and the query vector

Document Vector | Cosine Similarity with q′
d′_1 | -0.159175941709
d′_2 | 0.632742337711
d′_3 | 0.316550798563
d′_4 | -0.127460924837
d′_5 | 0.0527087463909
d′_6 | 0.131898981331
d′_7 | -0.118214851475
d′_8 | 0.238161094242
d′_9 | -0.191028402639
d′_10 | -0.0465719032226

From Table 3.7 we see that the query vector has maximum similarity with the document vector of E2. So the grade point 3.00 of E2 is assigned to the submitted answer.
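The whole worked example of Section 3.5 can be replayed in a few lines of numpy. The sketch below is an illustrative reconstruction (sign conventions and rounding may differ from the hand-computed values above, since SVD factors are unique only up to sign, but cosine similarities and the resulting grade are unaffected):

    import numpy as np

    # Matrix A from Table 3.4, grades from Table 3.1, query q^T from Table 3.6
    A = np.array([
        [1, 6, 6, 6, 1, 1, 6, 6, 1, 1],
        [3, 3, 0, 3, 0, 0, 0, 0, 0, 0],
        [1, 1, 0, 1, 0, 0, 0, 0, 0, 0],
        [6, 6, 6, 0, 0, 0, 6, 6, 0, 0],
        [3, 3, 3, 0, 0, 0, 3, 3, 0, 1],
        [1, 1, 1, 0, 0, 0, 1, 1, 0, 0],
        [0, 1, 0, 0, 1, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 1, 1, 0, 0, 1, 0],
        [0, 3, 0, 3, 1, 1, 1, 0, 3, 0],
        [1, 0, 0, 0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 0, 0, 3, 0, 0, 0, 3],
        [0, 0, 0, 0, 3, 0, 0, 3, 0, 0],
        [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], float)
    grades = [4, 3, 2, 2, 0, 0, 3, 1, 0, 1]
    q = np.array([6, 0, 0, 6, 3, 1, 1, 0, 3, 0, 0, 0, 1], float)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = int((s >= 1.0).sum())                 # keep singular values >= 1 (here k = 7)
    Uk, Sk_inv = U[:, :k], np.diag(1.0 / s[:k])

    D = A.T @ Uk @ Sk_inv                     # document vectors d'_1 .. d'_10 (eq. 15)
    qv = q @ Uk @ Sk_inv                      # query vector q' (eq. 16)
    sims = (D @ qv) / (np.linalg.norm(D, axis=1) * np.linalg.norm(qv))
    print(grades[int(np.argmax(sims))])       # -> 3, the grade of essay E2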
Chapter 4

Simulation

The objective of this chapter is to verify the accuracy and reliability of ABESS as compared to the human grader. The experimental evaluation has been performed with student-submitted essays in the Bangla language and with synthetically generated essays. The experimental results have been compared with existing AEG systems.

4.1 Experimental Environment

ABESS has been tested on a machine (treated as the server) with a 2.10 GHz Intel Core 2 Duo processor and 2 GB of RAM, running Microsoft Windows Server 2003 with an Apache server. We have developed a client-server online system for grading essays. The system has been developed in the Microsoft Visual Studio 2008 environment. We have used C# (CSharp) for server-side processing and ASP.NET for client-side scripting. The system administrator submits training essays and other related data using an online administrator interface in a web browser. Students submit essays online and receive instant results for their essays. For storing and retrieving data we have used a MySQL database.

4.2 Dataset Used for Testing ABESS

We have tested our system in two ways: based on model essays and based on student-submitted essays. At first, we trained our system with 100 essays which were different synthetic combinations of model essays, together with their grades. The theme of the essay was “বাংলােদেশর sাধীনতা সংgাম” (Bangladesher Shadhinota Songram). We then tested our model with 40 synthetically generated essays; the synthetic essays were graded by the average of the marks given by two school teachers. Secondly, we tested our system with student-submitted essays and narrative answers from Bangla literature. We trained ABESS with 100 high-school students' submitted essays with corresponding human grades. The theme of the essay was “কািরগির িশkা” (Karigori Shikkha). Then we tested ABESS with 80 student-submitted essays. Finally, we tested our system with 20 narrative answers from students' submitted scripts of Bangla literature. Table 4.1 shows the datasets.

Table 4.1: The students' submitted data sets

Set no. | Topic | No. of words | Type/Level | Training Essays | Test Essays
1 | বাংলােদেশর sাধীনতা সংgাম (Bangladesher Shadhinota Songram) | 1000 | Synthetic | 100 | 40
2 | কািরগির িশkা (Karigori Shikkha) | 2000 | SSC | 100 | 80
3 | 1িট রচনামূলক p (Ekti Rochonamulok Prosna) | 400 | SSC | 80 | 20

4.3 Evaluation Methodology

Both the training essays and the submitted essays were graded by a human grader first. The final mark for an essay was the average of the marks given by two teachers. The grade point of each essay ranged from 2.0 to 4.0, with 0.00 for marks below 40%; a higher point represents higher quality. Table 4.2 shows the summary of the grading system.

Table 4.2: Grade point according to obtained marks

Obtained Marks (%) | Grade Point
80 - 100 | 4.00
70 - 79 | 3.50
60 - 69 | 3.00
50 - 59 | 2.50
40 - 49 | 2.00
less than 40 | 0.00

4.4 Simulation Results

The performance of a method for scoring essays can be evaluated by one indicator, namely accuracy: how close the automated grade is to the human grade. The closer the ABESS grade is to the human grade, the more accurate the system is with respect to the human grader. To evaluate the essays “বাংলােদেশর sাধীনতা সংgাম” (Bangladesher Shadhinota Songram) and “কািরগির িশkা” (Karigori Shikkha) and a narrative answer of SSC level, ABESS was trained with 100, 100, and 80 model answers, respectively, graded by human graders. We used an additional 40 and 80 essays on the above topics and 20 narrative answers on Bangla literature, graded by human graders, to test the performance of ABESS. The numerical marks were converted into grade points according to Table 4.2. Detailed results for “বাংলােদেশর sাধীনতা সংgাম” (Bangladesher Shadhinota Songram), “কািরগির িশkা” (Karigori Shikkha) and the narrative answer are shown in Tables 4.3-4.5. Rows where the ABESS grade point differs from the teacher grade point indicate grading errors.

Table 4.3: Difference between teacher grade and ABESS grade for “বাংলােদেশর sাধীনতা সংgাম” (Bangladesher Shadhinota Songram)

Essay No. | Teacher grade | ABESS grade | Difference (teacher − ABESS)
1 | 4.00 | 3.50 | 0.50
2 | 3.50 | 3.50 | 0.00
3 | 0.00 | 0.00 | 0.00
4 | 2.00 | 2.00 | 0.00
5 | 4.00 | 4.00 | 0.00
6 | 4.00 | 4.00 | 0.00
7 | 0.00 | 0.00 | 0.00
8 | 4.00 | 4.00 | 0.00
9 | 2.50 | 2.50 | 0.00
10 | 2.50 | 2.50 | 0.00
11 | 2.50 | 2.50 | 0.00
12 | 3.50 | 3.50 | 0.00
13 | 3.50 | 3.50 | 0.00
14 | 4.00 | 4.00 | 0.00
15 | 4.00 | 4.00 | 0.00
16 | 0.00 | 0.00 | 0.00
17 | 3.00 | 3.50 | -0.50
18 | 3.00 | 3.00 | 0.00
19 | 4.00 | 4.00 | 0.00
20 | 4.00 | 4.00 | 0.00
21 | 3.50 | 3.50 | 0.00
22 | 3.50 | 3.50 | 0.00
23 | 3.50 | 3.50 | 0.00
24 | 4.00 | 4.00 | 0.00
25 | 2.00 | 2.00 | 0.00
26 | 4.00 | 4.00 | 0.00
27 | 2.50 | 2.50 | 0.00
28 | 2.50 | 2.50 | 0.00
29 | 2.00 | 2.00 | 0.00
30 | 3.50 | 3.50 | 0.00
31 | 3.50 | 3.50 | 0.00
32 | 3.00 | 3.00 | 0.00
33 | 2.50 | 2.50 | 0.00
34 | 3.00 | 3.00 | 0.00
35 | 3.00 | 3.00 | 0.00
36 | 3.00 | 3.00 | 0.00
37 | 3.50 | 3.50 | 0.00
38 | 3.50 | 3.50 | 0.00
39 | 2.50 | 2.50 | 0.00
40 | 2.50 | 2.50 | 0.00
From Table 4.3 we see that essay no. 1 and essay no. 17 were missed by ABESS: the system gave grades different from the human grades. All other essays were graded successfully. For the test essay “বাংলােদেশর sাধীনতা সংgাম”, ABESS achieved 95% accuracy.

Table 4.4: Comparison of human grade and ABESS grade for the essay “কািরগির িশkা” (Karigori Shikkha)

Essay No. | Teacher grade | ABESS grade | Difference (teacher − ABESS)
1 | 4.00 | 4.00 | 0.00
2 | 3.00 | 3.00 | 0.00
3 | 3.50 | 3.50 | 0.00
4 | 3.50 | 3.50 | 0.00
5 | 3.00 | 3.00 | 0.00
6 | 3.00 | 3.00 | 0.00
7 | 4.00 | 4.00 | 0.00
8 | 4.00 | 4.00 | 0.00
9 | 4.00 | 3.50 | 0.50
10 | 4.00 | 4.00 | 0.00
11 | 4.00 | 4.00 | 0.00
12 | 4.00 | 4.00 | 0.00
13 | 4.00 | 4.00 | 0.00
14 | 2.50 | 2.50 | 0.00
15 | 2.50 | 2.50 | 0.00
16 | 3.50 | 3.50 | 0.00
17 | 2.50 | 2.50 | 0.00
18 | 2.50 | 3.00 | -0.50
19 | 4.00 | 4.00 | 0.00
20 | 4.00 | 4.00 | 0.00
21 | 4.00 | 4.00 | 0.00
22 | 2.50 | 2.50 | 0.00
23 | 2.50 | 2.50 | 0.00
24 | 2.50 | 2.50 | 0.00
25 | 2.50 | 2.50 | 0.00
26 | 3.50 | 3.50 | 0.00
27 | 3.00 | 3.00 | 0.00
28 | 3.50 | 3.50 | 0.00
29 | 3.50 | 3.50 | 0.00
30 | 2.00 | 2.00 | 0.00
31 | 0.00 | 0.00 | 0.00
32 | 2.00 | 2.00 | 0.00
33 | 3.00 | 3.00 | 0.00
34 | 3.50 | 3.50 | 0.00
35 | 0.00 | 0.00 | 0.00
36 | 0.00 | 0.00 | 0.00
37 | 3.00 | 3.00 | 0.00
38 | 3.00 | 3.00 | 0.00
39 | 3.00 | 3.00 | 0.00
40 | 3.00 | 3.00 | 0.00
41 | 3.00 | 3.00 | 0.00
42 | 2.00 | 2.00 | 0.00
43 | 2.00 | 2.00 | 0.00
44 | 3.50 | 3.50 | 0.00
45 | 3.50 | 3.50 | 0.00
46 | 3.00 | 3.00 | 0.00
47 | 3.00 | 3.00 | 0.00
48 | 4.00 | 4.00 | 0.00
49 | 4.00 | 4.00 | 0.00
50 | 4.00 | 4.00 | 0.00
51 | 4.00 | 4.00 | 0.00
52 | 4.00 | 4.00 | 0.00
53 | 4.00 | 4.00 | 0.00
54 | 4.00 | 4.00 | 0.00
55 | 4.00 | 4.00 | 0.00
56 | 4.00 | 4.00 | 0.00
57 | 2.50 | 2.50 | 0.00
58 | 3.50 | 3.50 | 0.00
59 | 3.50 | 3.50 | 0.00
60 | 3.50 | 3.50 | 0.00
61 | 2.50 | 2.50 | 0.00
62 | 2.50 | 2.50 | 0.00
63 | 3.00 | 3.00 | 0.00
64 | 3.00 | 3.00 | 0.00
65 | 3.00 | 3.00 | 0.00
66 | 3.00 | 3.00 | 0.00
67 | 2.50 | 2.50 | 0.00
68 | 2.50 | 2.50 | 0.00
69 | 2.50 | 2.50 | 0.00
70 | 2.50 | 2.50 | 0.00
71 | 2.50 | 2.50 | 0.00
72 | 3.00 | 2.00 | 1.00
73 | 3.50 | 3.50 | 0.00
74 | 2.00 | 2.00 | 0.00
75 | 2.00 | 2.00 | 0.00
76 | 2.00 | 2.00 | 0.00
78 | 3.50 | 3.50 | 0.00
79 | 3.00 | 3.00 | 0.00
80 | 0.00 | 0.00 | 0.00

From Table 4.4 we see that essay no. 9, essay no. 18, and essay no. 72 were missed by ABESS: the system gave grades different from the human grades. All other essays were graded successfully. For this dataset the ABESS accuracy is 96.25%.

Table 4.5: Comparison of human grade and ABESS grade for the narrative answer

Essay No. | Teacher grade | ABESS grade | Difference (teacher − ABESS)
1 | 4.00 | 3.00 | 1.00
2 | 2.00 | 3.00 | -1.00
3 | 4.00 | 4.00 | 0.00
4 | 4.00 | 4.00 | 0.00
5 | 3.00 | 4.00 | -1.00
6 | 3.00 | 3.00 | 0.00
7 | 3.00 | 3.00 | 0.00
8 | 3.00 | 3.00 | 0.00
9 | 4.00 | 3.00 | 1.00
10 | 2.00 | 2.00 | 0.00
11 | 2.00 | 2.00 | 0.00
12 | 3.00 | 2.00 | 1.00
13 | 3.00 | 3.00 | 0.00
14 | 2.00 | 2.00 | 0.00
15 | 3.00 | 3.00 | 0.00
16 | 3.00 | 2.00 | 1.00
17 | 4.00 | 4.00 | 0.00
18 | 4.00 | 2.00 | 2.00
19 | 4.00 | 4.00 | 0.00
20 | 3.00 | 4.00 | -1.00
From Table 4.5 we see that seven essays were missed by ABESS: the system gave grades different from the human grades. In this case we found that in the training answer scripts the human grader had given different grades for the same answer and the same grade for different answers.

Human Score | No. of Test Essays | ABESS: 4.00 | 3.50 | 3.00 | 2.50 | 2.00 | 0.00
4.00 | 10 | 9 | 1 | 0 | 0 | 0 | 0
3.50 | 10 | 0 | 10 | 0 | 0 | 0 | 0
3.00 | 7 | 0 | 1 | 6 | 0 | 0 | 0
2.50 | 8 | 0 | 0 | 0 | 8 | 0 | 0
2.00 | 3 | 0 | 0 | 0 | 0 | 3 | 0
0.00 | 2 | 0 | 0 | 0 | 0 | 0 | 2

Fig. 4.1: Grade point mapping from human to ABESS for the synthetic essay “বাংলােদেশর sাধীনতা সংgাম” (Bangladesher Shadhinota Songram)

Human Score | No. of Test Essays | ABESS: 4.00 | 3.50 | 3.00 | 2.50 | 2.00 | 0.00
4.00 | 20 | 19 | 1 | 0 | 0 | 0 | 0
3.50 | 14 | 0 | 14 | 0 | 0 | 0 | 0
3.00 | 20 | 0 | 0 | 19 | 0 | 1 | 0
2.50 | 16 | 0 | 0 | 1 | 15 | 0 | 0
2.00 | 6 | 0 | 0 | 0 | 0 | 6 | 0
0.00 | 4 | 0 | 0 | 0 | 0 | 0 | 4

Fig. 4.2: Mapping of grades from human grades to ABESS for the essay “কািরগির িশkা” (Karigori Shikkha)

Human Score | No. of Test Essays | ABESS: 4.00 | 3.50 | 3.00 | 2.50 | 2.00 | 0.00
4.00 | 7 | 5 | 0 | 1 | 0 | 1 | 0
3.50 | 0 | 0 | 0 | 0 | 0 | 0 | 0
3.00 | 9 | 2 | 0 | 5 | 0 | 2 | 0
2.50 | 0 | 0 | 0 | 0 | 0 | 0 | 0
2.00 | 4 | 0 | 0 | 1 | 0 | 3 | 0
0.00 | 0 | 0 | 0 | 0 | 0 | 0 | 0

Fig. 4.3: Mapping of grades from human grades to ABESS for the narrative answers of SSC level Bangla literature

Figures 4.1-4.3 show the comparisons of ABESS grades with human grades for the essay “বাংলােদেশর sাধীনতা সংgাম” (Bangladesher Shadhinota Songram), the essay “কািরগির িশkা” (Karigori Shikkha), and the narrative answer of SSC level, respectively. In Fig. 4.1 we see that the number of essays having human grade point 4.00 is 10, whereas ABESS assigned 4.00 to 9 of them and 3.50 to 1; so the error is 10%. The results show that ABESS errors can be one grade above or below the human grade. For example, of the 7 essays having human grade point 3.00, 1 received grade point 3.50 and 6 received grade point 3.00. For some grade points there are no errors, e.g., grade points 2.00, 2.50, and 0.00. In Fig. 4.2 we see that the number of essays having human grade point 4.00 is 20, whereas ABESS assigned 4.00 to 19 and 3.50 to 1; so the error is 5% for grade 4.00. Essays having human grade point 3.00 are 20, whereas ABESS assigned 3.00 to 19 and 2.00 to 1; so the error is 5% for grade 3.00. Essays having human
grade point 2.50 are 16, whereas ABESS assigned 2.50 to 15 and 3.00 to 1; so the error is 6.25% for grade 2.50. In Fig. 4.3 we see that ABESS incorrectly graded several of the submitted essays. ABESS made many errors for the dataset "narrative answers of SSC level Bangla literature" because of the variation of marks in the SSC level answer scripts, i.e., the human graders gave different grade points for the same answer.

4.4.1 Testing ABESS by Using True Positive, False Positive, True Negative and False Negative

In an IR system, given a query, a document collection can be divided into two parts: those truly relevant and those not. An IR system will retrieve the documents it deems relevant, thus dividing the collection into what it considers relevant and what it considers irrelevant. The two divisions are often not the same. Therefore we have four counts, shown in Table 4.6.

Table 4.6: True positive, false positive, true negative and false negative

 | Truly Relevant | Truly Irrelevant
Retrieved | True Positive (TP) | False Positive (FP)
Not retrieved | False Negative (FN) | True Negative (TN)

Since we have used an IR technique for AEG, we have tested our system with these measures. We calculated the true positives, true negatives, false positives and false negatives from the ABESS output, defined as follows:

True positive: a test result that shows positive and is really positive. In our experiment, if ABESS gives an essay the grade point 4.00 and the human grade point is 4.00, the result is a true positive.

True negative: a test result that shows negative and is really negative. In our experiment, if ABESS does not give the grade point 0.00 when the human grade point 0.00 is not present in the current essay set, it is a true negative.

False positive: a test result that shows positive but is really negative. In our experiment, if ABESS gives the grade point 0.00 for an essay whose human grade point is not 0.00, it is a false positive.

False negative: a test result that shows negative but is really positive. In our experiment, if ABESS gives the grade point 0.00 for an essay to which the human grader assigned 2.00, it is a false negative.

Missed: the number of essays for which the human grader assigned a particular grade but ABESS did not assign the same grade point.

Spurious: the number of essays for which ABESS assigned a grade but the human grader did not give the same grade point.
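Counting these per-grade outcomes from two lists of grades is mechanical. A small illustrative Python sketch follows (hypothetical data, not the thesis's evaluation code):

    from collections import Counter

    def per_grade_counts(human, abess):
        # correct: graded identically; missed: human gave g but ABESS did not;
        # spurious: ABESS gave g but human did not (Section 4.4.1 definitions)
        correct = Counter(h for h, a in zip(human, abess) if h == a)
        missed = Counter(h for h, a in zip(human, abess) if h != a)
        spurious = Counter(a for h, a in zip(human, abess) if h != a)
        return correct, missed, spurious

    human = [4.0, 3.5, 3.0, 2.5, 0.0]
    abess = [4.0, 3.5, 3.5, 2.5, 0.0]
    print(per_grade_counts(human, abess))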
Table 4.7: True positive, true negative, false positive and false negative of ABESS for the essay “বাংলােদেশর sাধীনতা সংgাম” (Bangladesher Shadhinota Songram)

Grade | Human Graded | Correct by ABESS | Missed | Spurious | True Positive | True Negative | False Positive | False Negative
4.00 | 10 | 9 | 1 | 0 | 90% | 0% | 0% | 10%
3.50 | 10 | 12 | 0 | 2 | 100% | 0% | 20% | 0%
3.00 | 7 | 6 | 1 | 0 | 85.72% | 0% | 0% | 14.29%
2.50 | 8 | 8 | 0 | 0 | 100% | 0% | 0% | 0%
2.00 | 3 | 4 | 0 | 0 | 100% | 0% | 0% | 0%
0.00 | 2 | 2 | 0 | 0 | 100% | 0% | 0% | 0%

Table 4.7 shows the results obtained by ABESS while factoring in relevant or irrelevant results for the query (the submitted essay). In this table, the first column shows the test grades we assigned to the essays. The second column represents the number of essays that the human grader manually assigned to each essay grade. The third column represents the number of essays correctly evaluated by ABESS. The fourth column represents the number of essays to which the human grader (and not ABESS) assigned each score. The fifth shows the number of texts for which ABESS (and not the human grader) assigned each score. Finally, the last four columns show the true positive, true negative, false positive and false negative rates, respectively. From the table we see that 85.72% to 100% of the queries (the grades for the submitted essays) are true positives, i.e., ABESS shows 85.72% to 100% relevant results for the query. So, from the results of Table 4.7, we see that the ABESS grades are very close to the human grades, with only a small amount of error.

Fig. 4.4: Comparison of human grade and ABESS grade for the essay “বাংলােদেশর sাধীনতা সংgাম” (Bangladesher Shadhinota Songram)

Fig. 4.4 shows a pictorial view of the results given by ABESS for the 40 test essays of different grades for the dataset “বাংলােদেশর sাধীনতা সংgাম” (Bangladesher Shadhinota Songram). In Fig. 4.4 we see that the ABESS grades are very close to the human grades, so ABESS shows a high level of accuracy for this dataset.

Fig. 4.5: Number of essays missed and spurious by ABESS for the essay “বাংলােদেশর sাধীনতা সংgাম” (Bangladesher Shadhinota Songram)

From Fig. 4.5 we see that ABESS missed some grades that were given by the human grader, and assigned some grades that the human grader had not given. This figure shows the errors of ABESS.

Fig. 4.6: True positive, false positive and false negative of the ABESS test result for the essay “বাংলােদেশর sাধীনতা সংgাম” (Bangladesher Shadhinota Songram)

Fig. 4.6 has been made from Table 4.7. Here we found that ABESS graded the essays such that most grades are relevant to the query, i.e., most of those grades are the same as the human grades of the submitted essays. However, ABESS also gave some irrelevant results, which produced some false positives and false negatives.
Table 4.8: True positive, true negative, false positive and false negative of ABESS for the essay “কািরগির িশkা” (Karigori Shikkha)

Grade | Human Graded | Correct by ABESS | Missed | Spurious | True Positive | True Negative | False Positive | False Negative
4.00 | 20 | 19 | 1 | 0 | 95% | 0% | 0% | 20%
3.50 | 14 | 14 | 0 | 1 | 100% | 0% | 7% | 0%
3.00 | 20 | 19 | 1 | 1 | 95% | 0% | 20% | 20%
2.50 | 16 | 15 | 1 | 0 | 93.75% | 0% | — | 16%
2.00 | 6 | 6 | 0 | 1 | 100% | 0% | 16% | 0%
0.00 | 4 | 4 | 0 | 0 | 100% | 0% | 0% | 0%

From Table 4.8 we see that ABESS produced less accurate results for the essay set “কািরগির িশkা” (Karigori Shikkha) than for “বাংলােদেশর sাধীনতা সংgাম” (Bangladesher Shadhinota Songram), because there is some variation in the human grades: different grades were given for the same text.

Fig. 4.7: Comparison of human grade and ABESS grade for the essay “কািরগির িশkা” (Karigori Shikkha)

From Fig. 4.7 we see that the ABESS grades are very close to the human grades for the essay set “কািরগির িশkা” (Karigori Shikkha).

Fig. 4.8: Number of essays missed and spurious by ABESS for the essay “কািরগির িশkা” (Karigori Shikkha)

From Fig. 4.8 we see that ABESS missed some grades that were given by the human grader, and assigned some grades that the human grader had not given. This figure shows the errors of ABESS.

Fig. 4.9: True positive, false positive and false negative of ABESS for the essay “কািরগির িশkা” (Karigori Shikkha)

Fig. 4.9 has been made from Table 4.8. Here we found that ABESS retrieved essays most of which are relevant to the query, but there are some irrelevant results, shown by the false positives and false negatives.

Table 4.9: True positive, true negative, false positive and false negative of ABESS for the narrative answers of SSC level Bangla literature

Grade | Human Graded | Correct by ABESS | Missed | Spurious | True Positive | True Negative | False Positive | False Negative
4.00 | 7 | 5 | 2 | 2 | 71.42% | 0% | 28.57% | 28.58%
3.50 | 0 | 0 | 0 | 0 | 0% | 100% | 0% | 0%
3.00 | 9 | 5 | 4 | 2 | 55.55% | 0% | 22.22% | 44.44%
2.50 | 0 | 0 | 0 | 0 | 0% | 100% | 0% | 0%
2.00 | 4 | 3 | 1 | 3 | 75% | 0% | 75% | 33.33%
0.00 | 0 | 0 | 0 | 0 | 0% | 100% | 0% | 0%

From Table 4.9 we see that there are some irrelevant results for the query. In the answers of SSC level Bangla literature there was much variation between the human grades: in some answer scripts different grades were given by the human graders for the same answer, and the same grade was given for different answers.

Fig. 4.9: Comparison of human grade and ABESS grade for the narrative answers

From this comparison we see that the ABESS grades are not very close to the human grades for the answers of SSC level Bangla literature.
4.4.2 Testing ABESS by Using Precision, Recall and F1-measure

The most commonly used performance measures in IR are precision, recall and the F1 measure.

Precision: In the field of IR, precision is the fraction of retrieved documents that are relevant to the search:

$Precision = \dfrac{|\{\text{relevant documents}\} \cap \{\text{retrieved documents}\}|}{|\{\text{retrieved documents}\}|}$

Precision takes all retrieved documents into account, but it can also be evaluated at a given cut-off rank, considering only the topmost results returned by the system.

Recall: Recall in information retrieval is the fraction of the documents relevant to the query that are successfully retrieved:

$Recall = \dfrac{|\{\text{relevant documents}\} \cap \{\text{retrieved documents}\}|}{|\{\text{relevant documents}\}|}$

F1-measure: A measure that combines precision and recall is their harmonic mean, the traditional F1-measure or balanced F1-score:

$F_1 = \dfrac{2 \cdot Precision \cdot Recall}{Precision + Recall}$

We calculated precision, recall and the F1 measure to evaluate the accuracy of ABESS. The scores provided by ABESS were compared to the scores given by humans, and the comparison was reflected in the precision, recall and F1 values. In this thesis, for automated essay grading using the IR technique, we define these measures as follows:

Precision: the number of essays correctly graded by ABESS divided by the total number of essays evaluated by ABESS.

Recall: the number of essays correctly graded by ABESS divided by the total number of essays evaluated by the human grader.

F1: the F1 score (also called F-measure) is a measure of a test's accuracy; it is a combined measure, the harmonic mean of precision and recall.
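Under these definitions, per-grade precision, recall and F1 can be computed directly from the correct/missed/spurious counts of Section 4.4.1. A hedged Python sketch (the counts in the example are taken from the grade-4.00 row of Table 4.10 below):

    def prf1(correct, missed, spurious):
        # precision = correct / (correct + spurious); recall = correct / (correct + missed)
        precision = correct / (correct + spurious) if correct + spurious else 0.0
        recall = correct / (correct + missed) if correct + missed else 0.0
        f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
        return precision, recall, f1

    # Grade 4.00 row of Table 4.10: 9 correct, 1 missed, 0 spurious
    print(prf1(9, 1, 0))   # -> (1.0, 0.9, ~0.947), i.e. 100 / 90 / 94.74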
<s>From Table 4.10 we see that precision and recall are not the same for some test sets, but over the total number of essays the precision, recall and F1 are the same. Here we found that 95% accuracy is achieved by ABESS. We have also calculated the precision, recall and F1 measure of ABESS for the student-submitted essays on “কািরগির িশkা” (Karigori Shikkha), which is shown in Table 4.11. For this dataset ABESS has been trained with 100 pregraded essays and tested with 80 student-submitted essays.

Table 4.11: Precision and recall of ABESS for the essay “কািরগির িশkা” (Karigori Shikkha)

Score | No. of essays graded by human | No. of essays correctly scored by ABESS | Missed by ABESS | Spurious | Precision | Recall | F1
4.00 | 20 | 19 | 1 | 0 | 100 | 95 | 97.44
3.50 | 14 | 14 | 0 | 1 | 93.33 | 100 | 95.55
3.00 | 20 | 19 | 1 | 1 | 95 | 95 | 95.0
2.50 | 16 | 15 | 1 | 0 | 100 | 93.75 | 96.77
2.00 | 6 | 6 | 0 | 1 | 85.71 | 100 | 92.31
0.00 | 4 | 4 | 0 | 0 | 100 | 100 | 100
Total | 80 | 77 | 3 | 3 | 96.25 | 96.25 | 96.25

From Table 4.11 we see that 96.25% accuracy is achieved by ABESS for the dataset “কািরগির িশkা” (Karigori Shikkha). Some essays were missed or spuriously graded by ABESS, which accounts for the errors.

Fig. 4.10: Precision and recall of ABESS for the essay “কািরগির িশkা” (Karigori Shikkha)

Table 4.12: Precision and recall of ABESS for the narrative answers

Score | No. of essays graded by human | No. of essays correctly scored by ABESS | Missed by ABESS | Spurious | Precision | Recall | F1
4.00 | 7 | 5 | 2 | 2 | 71.42 | 71.42 | 71.42
3.50 | 0 | 0 | 0 | 0 | 0 | 0 | 0
3.00 | 9 | 5 | 4 | 2 | 71.42 | 55.55 | 62.49
2.50 | 0 | 0 | 0 | 0 | 0 | 0 | 0
2.00 | 4 | 3 | 1 | 3 | 50.00 | 75.00 | 60
0.00 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Total | 20 | 13 | 7 | 7 | 65 | 65 | 65

Fig. 4.11: Precision and recall of ABESS for the narrative answers

From Table 4.10 we see that using the synthetic essays 95% accuracy is achieved, where the human-grade-to-human-grade variation is 0. Using the student-submitted essays we got 96.25% accuracy, as shown in Table 4.11. But using the narrative answers we found only 65% accuracy, as shown in Table 4.12; this is because in the answer scripts of the SSC level students the variation from human grade to human grade is very high. In the answer scripts, one examiner has given grade 4.00 for an answer whereas the other examiner has given grade 2.00 for the same text. On average, our system is 89% to 95% accurate as compared with the human grader when the human-to-human variation is low.

Table 4.13: Comparison between the performances of four AEG approaches

AES technique | Accuracy
IEA using LSA | 85-91%
AEA using LSA | 75%
Apex using LSA | 59%
ABESS | 89-95%

We have compared our system with the performance of previous systems that are based on LSA.</s>
<s>Table 4.13 contrasts the performance of the new technique with that of previous methods. Valenti et al. [12] indicate that the accuracy of the LSA-based IEA ranges from 85% to 91%. Kakkonen et al. [19] indicate that the Automatic Essay Assessor (AEA) agrees with the human grade 75% of the time. Lemaire and Dessus [11] indicate that Apex (an Assistant for Preparing EXams), a tool for evaluating student essays based on their content using LSA, achieves 59% agreement with the human grade. We have also tested ABESS with English-language essays and obtained 89% to 95% accuracy. Table 4.13 shows that the performance of ABESS in scoring essays is very close to the human grades.

Chapter 5
Conclusion

The use of automated scoring techniques raises many interesting possibilities for assessment systems. Essays are one of the most accepted forms of student assessment at all levels of education and have been incorporated in many standardized testing programs (e.g., the SAT, GMAT and GRE). Many automated essay grading (AEG) systems have been developed for commercial and academic purposes, but existing systems fail to reach a high level of accuracy as compared to the human grader. In this thesis we have developed ABESS (Automated Bangla Essay Scoring System), an AEG system using Generalized Latent Semantic Analysis (GLSA) that overcomes most of the drawbacks of existing AEG systems. In particular, our GLSA-based system overcomes many limitations of LSA-based AEG systems: a student could get full marks from an LSA-based AEG system by writing an essay containing only keywords, whereas our GLSA-based system does not reward such essays.

5.1 Contributions

Our contributions in this thesis can be described as follows:

• We have developed the Automated Bangla Essay Scoring System (ABESS) using GLSA, which captures concepts more precisely by introducing an n-gram-by-document matrix. Using a concept-matching technique, the system grades a submitted essay by comparing it with the concepts of the training essays, and it takes the proximity of words in a sentence into account. We have achieved 89% to 95% agreement with the human grader, which is higher than that of existing AEG systems.

• We have trained our system using student-submitted answer scripts that have been graded by two human graders. For unbiased AEG grades, every essay has been graded by at least two human graders. The submitted essays are graded on the basis of the human-graded training essays.

• We have developed the prototype for scoring Bangla-language essays, though the approach is applicable to any language. We have tested ABESS with a sufficient number of student-submitted essays.

• We have shown an interesting relationship between human grades and ABESS grades. This can lead to the development of automatic grading systems based not only on multiple-choice exams, but rather on the semantic features of unrestricted essays. The system can also be used in distance learning systems, where students can connect to the system and freely submit their essays.</s>
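As an illustration of the n-gram-by-document idea described above, the following is a minimal sketch of a GLSA-style grader, not the actual ABESS implementation. It builds a unigram-plus-bigram by document matrix over pregraded training essays, reduces it with a truncated SVD, and assigns a test essay the grade of its most similar training essay by cosine similarity. The toy essays, the bigram order and the truncation rank are all illustrative assumptions.

```python
# Minimal GLSA-style grading sketch (illustrative, not the ABESS code).
# N-grams (here unigrams + bigrams) replace single words as matrix rows,
# so word proximity inside a sentence contributes to the concept space.
import numpy as np

def ngrams(text, n_max=2):
    words = text.split()
    return [" ".join(words[i:i + n]) for n in range(1, n_max + 1)
            for i in range(len(words) - n + 1)]

# Hypothetical pregraded training essays as (grade, text) pairs.
training = [(4.0, "liberation war of bangladesh began in 1971"),
            (3.0, "bangladesh war began in 1971"),
            (2.0, "war is bad")]

vocab = sorted({g for _, t in training for g in ngrams(t)})
row = {g: i for i, g in enumerate(vocab)}

def vector(text):
    v = np.zeros(len(vocab))
    for g in ngrams(text):
        if g in row:
            v[row[g]] += 1.0          # raw n-gram frequency weighting
    return v

A = np.column_stack([vector(t) for _, t in training])  # n-gram x document
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                  # truncation rank (an assumption)
docs_k = (np.diag(s[:k]) @ Vt[:k]).T   # documents in the k-dim concept space

def grade(essay):
    q = U[:, :k].T @ vector(essay)     # project the test essay
    sims = [q @ d / (np.linalg.norm(q) * np.linalg.norm(d) + 1e-12)
            for d in docs_k]
    return training[int(np.argmax(sims))][0]  # grade of nearest training essay

print(grade("the liberation war of bangladesh began in 1971"))  # -> 4.0
```

Because bigrams such as "liberation war" are rows of the matrix, a keyword-only essay shares far fewer rows with a well-written training essay than it would in a plain word-by-document LSA matrix, which is the drawback the thesis attributes to LSA-based systems.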
<s>5.2 Suggestions for Future Research

In this thesis we have considered the proximity of words in a sentence. The syntax of Bangla grammar and the general structure of the essay could also be considered. We have designed ABESS only for plain text; an AEG system could be developed that is applicable to answer scripts containing images, numbers and mathematical equations.

Related Publication:

[01] Md. Monjurul Islam and A. S. M. Latiful Hoque, “Automated essay scoring using Generalized Latent Semantic Analysis,” in Proceedings of the 13th International Conference on Computer and Information Technology (ICCIT 2010), 2010, pp. 358-363.

References

[01] E. B. Page, “Statistical and linguistic strategies in the computer grading of essays,” in Proceedings of the International Conference on Computational Linguistics, 1967, pp. 1-13.
[02] K. M. Nahar and I. M. Alsmadi, “The automatic grading for online exams in Arabic with essay questions using statistical and computational linguistics techniques,” MASAUM Journal of Computing, vol. 1, no. 2, pp. 215-220, 2009.
[03] Y. Attali and J. Burstein, “Automated essay scoring with e-rater® V.2,” The Journal of Technology, Learning and Assessment, vol. 4, no. 3, pp. 1-31, 2006.
[04] L. M. Rudner, V. Garcia, and C. Welch, “An evaluation of the IntelliMetric essay scoring system,” The Journal of Technology, Learning, and Assessment, vol. 4, no. 4, pp. 1-22, March 2006.
[05] L. M. Rudner and T. Liang, “Automated essay scoring using Bayes' theorem,” The Journal of Technology, Learning, and Assessment, vol. 1, no. 2, pp. 1-22, 2002.
[06] L. Bin, L. Jun, Y. Jian-Min, and Z. Qiao-Ming, “Automated essay scoring using the KNN algorithm,” in Proceedings of the International Conference on Computer Science and Software Engineering (CSSE 2008), 2008, pp. 735-738.
[07] T. Miller, “Essay assessment with latent semantic analysis,” Journal of Educational Computing Research, vol. 29, no. 4, pp. 495-512, 2003.
[08] C. Loraksa and R. Peachavanish, “Automatic Thai-language essay scoring using neural network and latent semantic analysis,” in Proceedings of the First Asia International Conference on Modeling & Simulation (AMS'07), 2007, pp. 400-402.
[09] D. T. Haley, P. Thomas, A. D. Roeck, and M. Petre, “Measuring improvement in latent semantic analysis based marking systems: using a computer to mark questions about HTML,” in Proceedings of the Ninth Australasian Computing Education Conference (ACE), vol. 66, 2007, pp. 35-52.
[10] T. Ishioka and M. Kameda, “Automated Japanese essay scoring system: Jess,” in Proceedings of the 15th International Workshop on Database and Expert Systems Applications, 2004, pp. 4-8.
[11] B. Lemaire and P. Dessus, “A system to assess the semantic content of student essays,” The Journal of Educational Computing Research, vol. 24, no. 3, pp. 305-320, 2001.
[12] S. Valenti, F. Neri, and A. Cucchiarelli, “An overview of current research on automated essay grading,” Journal of Information Technology Education, vol. 2, pp. 319-330, 2003.
[13] S. Ghosh and S. S. Fatima, “Design of an Automated Essay Grading (AEG) system in Indian context,” in Proceedings of TENCON 2008 - 2008 IEEE Region 10 Conference, 2008, pp. 1-6.
[14] P. W. Foltz, D. Laham, and T. K. Landauer, “Automated essay scoring: applications to educational technology,” in Proceedings of the World Conference on Educational Multimedia, Hypermedia and Telecommunications, 1999, pp. 939-944.</s>
<s>[15] S. C. Deerwester, S. T. Dumais, T. K. Landauer, G. W. Furnas, and R. A. Harshman, “Indexing by latent semantic analysis,” Journal of the American Society for Information Science, vol. 41, no. 6, pp. 391-407, 1990.
[16] M. M. Hasan, “Can information retrieval techniques meet automatic assessment challenges?,” in Proceedings of the 12th International Conference on Computer and Information Technology (ICCIT 2009), Dhaka, Bangladesh, 2009, pp. 333-338.
[17] G. W. Furnas, S. Deerwester, S. T. Dumais, T. K. Landauer, and K. E. Lochbaum, “Information retrieval using a singular value decomposition model of latent semantic structure,” in Proceedings of the 11th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1988, pp. 465-480.
[18] C. A. Kumar, A. Gupta, M. Batool, and S. Trehan, “Latent semantic indexing-based intelligent information retrieval system for digital libraries,” Journal of Computing and Information Technology - CIT 14, vol. 3, pp. 191-196, 2006.
[19] T. Kakkonen, N. Myller, J. Timonen, and E. Sutinen, “Comparison of dimension reduction methods for automated essay grading,” Journal of Educational Technology & Society, vol. 11, no. 3, pp. 275-288, 2008.
[20] A. M. Olney, “Generalizing latent semantic analysis,” in Proceedings of the 2009 IEEE International Conference on Semantic Computing, 2009, pp. 40-46.
[21] J. Mayfield and P. McNamee, “Indexing using both n-grams and words,” in Proceedings of the Seventh Text REtrieval Conference (TREC-7), 1998, pp. 419-423.
[22] A. Güven, Ö. Bozkurt, and O. Kalıpsız, “Advanced information extraction with n-gram based LSI,” in Proceedings of World Academy of Science, Engineering and Technology, vol. 17, 2006, pp. 13-18.
[23] M. J. Alam, N. UzZaman, and M. Khan, “N-gram based statistical grammar checker for Bangla and English,” in Proceedings of the 9th International Conference on Computer and Information Technology (ICCIT 2006), 2006, pp. 119-122.
[24] E. Brill, “Some advances in rule based part of speech tagging,” in Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94), 1994, pp. 722-727.
[25] F. Wild, C. Stahl, G. Stermsek, and G. Neumann, “Parameters driving effectiveness of automated essay scoring with LSA,” in Proceedings of the International Computer Assisted Assessment (CAA) Conference, Loughborough, UK, 2005, pp. 485-494.</s>
<s>ORIGINAL PAPER

A dependency annotation scheme for Bangla treebank

Sanjay Chatterji · Tanaya Mukherjee Sarkar · Pragati Dhang · Samhita Deb · Sudeshna Sarkar · Jayshree Chakraborty · Anupam Basu

© Springer Science+Business Media Dordrecht 2014

Abstract: Dependency grammar is considered appropriate for many Indian languages. In this paper, we present a study of the dependency relations in the Bangla language. We have categorized these relations at three different levels, namely intrachunk relations, interchunk relations and interclause relations. Each of these levels is further categorized and an annotation scheme has been developed. Both syntactic and semantic features have been taken into consideration for describing the relations. In our scheme, there are 63 such syntactico-semantic relations. We have verified the scheme by tagging a corpus of 4167 Bangla sentences to create a treebank (KGPBenTreebank).

Keywords: Dependency structure · Syntactico-semantic relation · Paninian karak · Modern Bangla grammar and language

1 Introduction

In this paper, we present a dependency annotation scheme for the Bangla language. The relations are prepared taking into account the modern grammatical and language structures of Bangla, mostly studied from Chatterji (2003), and the dependency relations used in other Indian languages like Hindi and Urdu (Sharma et al. 2007; Bhatt et al. 2009). Most of these existing Indian language treebanks follow the Paninian grammatical model, which is discussed in Bharati et al. (1999).

S. Chatterji · T. M. Sarkar · P. Dhang · S. Deb · S. Sarkar · A. Basu: Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur, India. E-mail: sanjaychatter@gmail.com; sudeshna@cse.iitkgp.ernet.in
J. Chakraborty: Humanities and Social Sciences, Indian Institute of Technology, Kharagpur, India

Lang Resources & Evaluation, DOI 10.1007/s10579-014-9266-3

By observing these grammatical models and dependency relations, we have tried to arrive at a complete list of dependency relations that captures the Bangla language. The relations are described by the syntactic and semantic features occurring between the words. In Bangla, syntax alone is not a strong cue for identifying the relations, because the word order in Bangla sentences is relatively less rigid and suffixes and postpositions play different roles in different contexts. Therefore, we have considered a balance of syntactic and semantic features to define the Bangla dependency grammar.

Computational processing of Bangla is challenging because of the scarcity of annotated resources. In the absence of a Bangla treebank, work on parsing and other studies that could have helped in machine translation, question answering, etc. has been hindered. Different grammars have been advocated for building treebanks in different languages, such as phrase structure grammar [in the Penn Treebank, Marcus et al. (1993)] and dependency grammar [in the Prague Dependency Treebank, Hajič et al. (1996)]. We have used a dependency grammar based scheme to build a treebank of Bangla sentences. The scheme has evolved through a careful study of the corpus considered for annotation (4167 Bangla sentences) as well as other corpora. The treebank created for these 4167 Bangla sentences is referred to as KGPBenTreebank.

The rest of the paper is organized as follows. Some related work is discussed in Sect. 2. In Sect. 3, we discuss some assumptions used to define the scheme and to build the KGPBenTreebank. The relation set of the scheme has been categorized and defined in Sects.</s>
4 and 5, respectively. In Sect. 6, we have analyzedthe scheme as well as the annotation process.</s> |
<s>In Sect. 7, we have compared the relations in KGPBenTreebank and Anncorra. Finally, Sect. 8 contains the conclusion. See ‘Appendix 1’ for the comprehensive dependency relation set of the scheme. In the paper, examples are written in the “Indian languages TRANSliteration” (itrans) encoding of Chopde (2000), and the mappings of the itrans encoding to glyphs in the Bangla and Hindi scripts are shown in ‘Appendix 2’. Each example contains a Bangla sentence followed by the gloss and translation in English. Glosses contain the root and features separated by a dash (-) symbol. Here, gen (genitive), loc (locative), acc (accusative), pl (plural), and sp (specifier) are the nominal features and past, pre (present), fut (future), prog (progressive), per (perfect), par (participle), inf (infinite), nf (nonfinite) and neg (negative) are the verbal features.

(The dependency grammar for the Bangla language and the Bangla treebank were created under the project “The Bangla Treebank”. This project is supported by the Linguistic Data Consortium for Indian Languages (LDC-IL) built by MHRD, Govt. of India under the aegis of the Central Institute of Indian Languages, Mysore, India. See http://www.cel.iitkgp.ernet.in/~oldtools/kgpbentreebank.html for details.)

2 Related work

Major treebanks are created on the basis of the phrase structure or the dependency structure of the language. Phrase structure grammar, also known as context-free grammar, was used to build some English treebanks. In these treebanks, the intermediate nodes are phrasal nodes and the leaf nodes indicate the words.

One of the earliest such attempts was the ATR/Lancaster Treebank project by Black et al. (1993, 1996) for an American English corpus of 730,000 words. Another contemporary attempt was made in the Penn Treebank, Marcus et al. (1993), with the same concept and tags, annotating a spoken and written American English corpus of 4.5 million words. The tags used in these two treebanks include 14 phrase structures: adjective phrase, adverb phrase, noun phrase, prepositional phrase, simple declarative clause, clause introduced by subordinating conjunction or 0, direct question introduced by wh-word or wh-phrase, declarative sentence with subject-aux inversion, subconstituent of SBARQ excluding wh-word or wh-phrase, verb phrase, wh-adverb phrase, wh-noun phrase, wh-prepositional phrase, and constituent of unknown or uncertain category. The Penn Treebank includes four NULL elements, namely *, T, 0 (zero) and (rarely) NIL, for four empty subject positions. The uses of these NULL elements are described in Santorini and Marcinkiewicz (1991).

The annotation tagsets of the above phrase structure treebanks have been widely used by others to annotate other treebanks. For example, a Chinese treebank was created in Xue et al. (2005) based on the Penn Treebank annotation scheme. A three-year Wall Street Journal (WSJ) collection of approximately 30 million words was annotated in Charniak et al. (2000) with the same structure as the Penn Treebank, except for some additional co-reference marking.

Besides phrase structure treebanks, attempts have been made to build treebanks by collecting the framesets for each lexeme of the sentence. The English Proposition Treebank (PropBank) (Palmer et al. 2005) used the semantic roles of the verbs and analyzed the frequency of syntactic or semantic alternations in the annotation of the Penn Treebank comprehensive corpus.</s>
2005) used the semantic roles of the verbs andanalyzed the frequency of syntactic or semantic alternations in the annotation of thePenn Treebank comprehensive corpus.However, phrase structure grammar is not so appropriate for annotating thetreebanks for all languages. Certain Indian languages, some Roman languages(Italian and Spanish) etc., where word structure is not so rigid are some examples ofthem.</s> |
<s>For these languages, dependency grammar has been considered as an alternative to phrase structure grammar. Nevertheless, dependency grammar has also been successfully used in English treebanks by Karlsson et al. (1995) and McCord (1990).

The Prague Dependency Treebank, Hajič et al. (1996, 2000, 2001), is one of the pioneering works in this direction. In this treebank, 40 syntactic dependency functions have been defined between a governor and its dependent nodes, such as: actor/bearer, addressee, patient, origin, effect, cause, regard, concession, aim, manner, extent, substitution, accompaniment, locative, means, temporal, attitude, cause, regard, directional, benefactive, comparison; there are also specific functions for dependents on nouns, for example material, appurtenance, restrictive and descriptive adjunct, the relation of identity, etc. Here, each word and each punctuation mark has been considered as one node. No extra node has been inserted in the tree except the root node.

A Quranic Arabic corpus was annotated (Karlsson et al. 1995) with the help of the dependency relations of traditional historical Arabic grammar known as iráb. Here, 45 dependency relations are categorized into 5 top categories of dependencies: nominal, verbal, phrases/clauses, adverbial and particle. (The list with a detailed description of these dependency relations can be seen at http://corpus.quran.com/documentation/syntaxrelation.jsp.)

Bhatt et al. (2009) have annotated a multilayered treebank for Hindi and Urdu based on the dependency relations of the Paninian grammatical model, Bharati et al. (1999). In the model, the dependency relations in the sentence are defined between the modifier and the modified words. In the treebank, Bharati et al. (2002) and Sharma et al. (2007) have considered the chunk as the basic unit of a dependency structure. They have also annotated the corpus using phrase structure grammar with a limited set of category labels: NP, AP, AdvP, VP and CP.

The treebanks are created both manually and semi-automatically. For example, to build the Penn Treebank, the POS tagged sentences were automatically parsed to yield a skeleton syntactic structure and then corrected by human annotators. The aim was to develop a large annotated corpus with minimum human effort. The Prague Dependency Treebank was created using a 3-level annotation process: morphological annotation, syntactic annotation and linguistic meaning annotation. Here, the trees contain three parts for each node: original word form, morphological information and syntactic tag. The online annotation of the Quranic Arabic corpus was done in a multistage approach: automatic rule-based tagging, initial manual verification, and online supervised collaborative proofreading. There were approximately 100 unpaid volunteer annotators and a small number of expert annotators (supervisors or reviewers) annotating through a popular public website.

3 Categorization of the relation set

A clause is a group of words with a subject and a predicate. A clause giving a complete proposition is an independent clause, whereas a dependent clause depends on another clause for making the proposition complete. A sentence can be either a single independent clause or it may contain, along with an independent clause, one or more dependent clauses. We have considered both the shallow level intrachunk relations and the deep interchunk and interclause relations to give a complete analysis of the Bangla sentence. A word-chunk (chunk) in a sentence is a syntactically correlated, non-overlapping group of words.</s>
In Bangla sentences, two chunks can interchange their places while keeping the meaning of the sentence unaltered. Within a chunk, some words are content words while some are function words. A content word is a word which carries meaning independently. One of the content words of a chunk is the head of the chunk.</s>
<s>We have categorized the relation set into three levels of relations. The relations inside a chunk are tagged as intrachunk relations at Level 1. At Level 2, we have tagged the relations between the chunks within a clause; there are three such relation types: Case/Karak relations, Modifier relations, and a few other interchunk relations like conjunct, particle, symbol, etc. Finally, the interclause relations are tagged at Level 3. Each level of relation is further classified based on syntactico-semantic features. We have used syntactic features like suffix, postposition, and morphological features, and semantic features like the type of action, obligation in doing the action, and animacy. We use syntactic features as far as syntax is able to analyze; in case of ambiguity, we fall back on semantic features.

The categorization of the relations is given below, along with the relation tags of the classifications. Each class of relation is defined in Sect. 5, starting from Level 1 relations to Level 3 relations.

Level 1: Intrachunk relations: ppl, stc, vx, pof, redup, and frag.

Level 2: Interchunk relations: karta (k1d, k1e, k1p, k1s, and k1g), karma (k2t, k2m, k2g, k2u, and k2s), karan (k3), apadan (k5p, k5s, k5t, and k5d), adhikaran (k7p, k7t, k7d, and k7s), and the other case-alike relations (rh, ru, des, r6v, compr, and sim) are categorized as Case. r6, ras, rasneg, nnmod, jnmod, dnmod, pronmod, pnmod, anmod, adv, vmod, neg, and acomp are categorized as Modifier. ccof, pcc, rad, par, qs, end, and sym are categorized as Other interchunk relations.

Level 3: Interclause relations: ref, clausal*, clausalcomp, and comp.

A dependent clause in a sentence is either a complement of the main clause or may modify the main clause. Interclause relations indicate the role of dependent clauses. Independent clauses are joined by a conjunct word, and no interclause relation is used in such cases.

4 Some salient features of the relation set and the KGPBenTreebank

In the process of building the KGPBenTreebank, we have defined the relations of the scheme and used them to annotate Bangla sentences. The annotated sentences are represented in two ways. In the textual representation, the dependency relation between a pair of words is shown in the format DEP(CHILD, PARENT), meaning that there is a dependency relation DEP where the dependent contains the word CHILD and the head contains the word PARENT. In the graphical representation, each sentence is represented as a rooted directed tree in which each edge and node is labelled: head words are used as labels of the parent nodes, dependents are used as labels of the child nodes, and the dependency relations are used as labels of the edges. The directions are from parent to child nodes.</s>
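The textual representation is easy to process mechanically. Below is a minimal sketch, our own illustration rather than a tool from the paper, that parses a list of DEP(CHILD, PARENT) entries and rebuilds the rooted tree; the entries correspond to the sentence of example (4) in Sect. 4.3 below, and treating the only word that never appears as a dependent as the root follows the definition given in Sect. 4.1.

```python
# Minimal sketch: parse textual dependency entries DEP(CHILD, PARENT)
# and rebuild the rooted tree. The entries correspond to the sentence
# "mohana Ama khAchchhe" (example (4) below).
import re
from collections import defaultdict

entries = ["k1d(mohana, khAchchhe)",
           "k2t(Ama, khAchchhe)",
           "end(., khAchchhe)"]

pattern = re.compile(r"^(\w+\*?)\((.+?),\s*(.+?)\)$")

children = defaultdict(list)   # head word -> [(relation, dependent)]
dependents, words = set(), set()
for entry in entries:
    rel, child, parent = pattern.match(entry).groups()
    children[parent].append((rel, child))
    dependents.add(child)
    words.update((child, parent))

# The root is the only word that never appears as a dependent (Sect. 4.1).
root = next(w for w in words if w not in dependents)

def show(node, depth=0):
    """Print each edge as parent --relation--> child, indented by depth."""
    for rel, child in children[node]:
        print("  " * depth + f"{node} --{rel}--> {child}")
        show(child, depth + 1)

print("root:", root)
show(root)
# Output:
# root: khAchchhe
# khAchchhe --k1d--> mohana
# khAchchhe --k2t--> Ama
# khAchchhe --end--> .
```

The printed edges are exactly the tree of Fig. 1: the verb is the root, and the agent, the object and the sentence-end marker hang off it.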
<s>During this exercise, we have considered certain features to simplify the task. These are discussed in the following subsections.

4.1 Syntactico-semantic relation set

In the scheme, a head word can have many dependents, but a dependent is related to only one head. A word which is not dependent on any head word is called the root of the sentence. Dependency relations between the head word and its dependents can be explained syntactically as well as semantically. We have considered the defined syntactico-semantic relation set [incorporating thematic and functional semantic relations occurring at the sentence level] with a major inclination towards logical relations to capture the dependency relations.

4.2 NULL node insertion

Bangla, like most of the Indian languages, has in general a subject-object-verb sentence structure, with the verb being the root of the sentence. However, a word or a symbol in a Bangla sentence may be dropped at the surface level. Some such cases are the copula verb in the present tense when it takes attributes, verbs that can be recovered from the presence of another verb, and the conjuncts connecting two words, phrases or clauses. For these cases it is necessary to define a placeholder so that the relations can be shown. We put a <NULL> as a placeholder for the dropped words, as discussed below.

a) Bangla copula verbs are dropped in the present tense in positive polarity instances. In interchunk relations, many of the relations are verb centric. A <NULL> verb is inserted in place of both the attributive and the existential copula when it is dropped at the surface.

(1) rAma bhAla chhele <NULL>.
[Ram good boy]
Ram is a good boy.

b) A <NULL> verb is inserted on the recoverability condition. Here, recovery is possible from the occurrence of another verb in the sentence.

(2) Ami Aja dillIte <NULL> Ara kAla kolakAtAYa yAba.
[I today Delhi-loc and tomorrow Kolkata-loc go-fut]
Today I will go to Delhi and tomorrow to Kolkata.

c) A <NULL> conjunction is inserted to connect two words, phrases or clauses.

(3) pena <NULL> penasila <NULL> khAtA eba.n ba;i.
[pen pencil notebook and book]
Pen, pencil, notebook and book.

4.3 Same tree for different sentences

As Bangla is a relatively free word order language, the words of a sentence can change their places. Some of the representations are unmarked (most fluent) and some are marked (rare). An unmarked simple Bangla sentence is shown in example (4).

(4) mohana Ama khAchchhe.
[Mohan mango eat-pre,prog]
Mohan is eating mango.

This is a simple sentence containing two nouns and a verb. “mohana” is the agent of the sentence, “Ama” is the object, and both are dependents of the verb “khAchchhe”. However, according to the word being stressed and the focus of discourse level semantics, the words of this sentence can be placed in different orders. The trees of all sentences containing these three words will remain the same as Fig. 1 (the k1d, k2t and end labels are defined in Sect. 5).

4.4 Ambiguity in the sentence

An ambiguous sentence is one having multiple syntactic structures. Like many other languages, Bangla also includes ambiguous sentences. For computational simplicity, for each ambiguous sentence we have chosen the one interpretation which seems most appropriate in the current context.

5 Definition of Bangla dependency relations

In this section, we define the dependency relations starting from Level 1 to Level 3. The intrachunk relations (Level 1) of the bottom level are defined in Sect. 5.1. Among the interchunk relations of Level 2, the Case, Modifier</s>
<s>and few other interchunkrelations are defined in Sects. 5.2, 5.3 and 5.4, respectively. Finally, interclauserelations of the top level are defined in Sect. 5.5.Each definition of dependency relation contains English and Bangla name of therelation separated by forward slash (/) and an acronym of the relation used fortagging KGPBenTreebank. Each definition is explained with an example Banglasentence where a pair of words has the corresponding dependency relation. Thecorresponding dependency relations are shown in textual representation.5.1 Intrachunk relationsIn a chunk, the meaning of a function word is expressed through its relation with thecontent word. Prepositions and postpositions in the noun chunk and the auxiliaryverbs in the verb chunk are examples of function words. Some Bangla postpositionsare generated from some verb roots which are otherwise used as main verbs. Somemain verbs are also used as auxiliary verbs. The relation between two words withina chunk is identified as intrachunk relation. The modifier intrachunk relations areFig. 1 A dependency treeA dependency annotation scheme for Bangla treebank123not referred here and will be discussed later (Sect. 5.3). Some of the intrachunkrelations are discussed below.Postposition/Anusarga (ppl): Postpositions like ‘theke’ (from), ‘diYe’ (by), etc.and prepositions like ‘binA’ (except) in the noun chunks (NP) are related to headwords by Postposition/Anusarga (ppl) dependency relation.(5) se skula theke phirachhe.[he school from return-pre,prog]He is returning from school.ppl(theke, skula)Spatio-temporal connection/Sthan-samaygata samparka (stc): Bangla nouns indi-cating space or time (space-time nouns) like ‘bhitara’ (inside), ‘bAire’ (outside),‘upara’ (above), ‘nicha’ (below), ‘Age’ (before), ‘pare’ (after), etc. may follow anoun or a pronoun with genitive marker. These space-time nouns may be followedby postposition. We relate the preceding noun or pronoun with space-time noun bySpatio-temporal connection/Sthan-samaygata samparka (stc) dependency relation.(6) se bA.Dira bhitara theke DAkachhe.[he home-gen inside from call-pre,prog]He is calling from the inside of the house.stc(bA.Dira, bhitara)Auxiliary verb/Sahayak kriya (vx): Auxiliary verbs in the verb chunks (VP) arerelated to main verbs by Auxiliary verb/Sahayak kriya (vx) dependency relation.(7) bAchchArA hese uThala.[child-pl laugh-nf rise-past]The children began to laugh.vx(uThala, hese)Part of/Kriya antargata bisheshya (pof): In conjunct verb chunks, the main verbscontain a nominal or adjectival part followed by a verb part. The first part is relatedto the second part by Part of/Kriya antargata bisheshya (pof) dependency relation.(8) ba;i melA bhAlabhAbe anuShThita haYechhe.[book fair well happen-par have-pre,per]The book fair has been executed well.pof(anuShThita, haYechhe)Reduplication/Shabda dbaita (redup): Reduplication/Shabda dbaita (redup) depen-dency relations include nominal, pronominal and adjectival reduplications indicatingplurality; verbal, adverbial, postpositional reduplications indicating continuity;reduplicated words indicating accuracy; onomatopoetic words, hedged expressionsand also echo words. We consider the first occurrence of such pair of words as the headword and the second word as the dependent. In the following example, we haveassigned two serial numbers (1 and 2) to disambiguate the head and the dependent.(9) bAchchArA galpa karate(1) karate(2) skula theke phirachhe.[child-pl story do-inf do-inf school from return-pre,prog]S. 
Chatterji et al.123The children are returning from school chatting.redup(karate(2), karate(1))Fragment/Bhagnamsha (frag): Suffixes can be written either attached to the word orindependently. The suffix occurring independently is related to the word byFragment/Bhagnamsha (frag) dependency relation.(10) manamohana si.nha (bhAratera pradhAnamantrI) ra bidesha saphara Achhe.[Manmohan Singh (India-gen</s> |
<s>Prime–Minister) -gen foreign trip be-pre]Manmohan Singh (the Prime minister of India) has a foreign trip.frag(si.nha, ra)5.2 Case/KarakThe basic karak relations and their categories, as discussed in the Paninian framework(Bharati et al. 1999), from the perspective of natural language processing (NLP) andalso as they are defined in Bangla traditional grammar are of 6 types: Karta, Karma,Karan, Sampradan, Apadan and Adhikaran. In the scheme used by us, the Paniniankarak relations, in general, are accepted with some changes as suggested in modernBangla grammars Chatterji (2003), Chakravarty (2010). Some Bangla noun verbrelations have been discussed in Chatterji et al. (2009).It is difficult to consider sampradan karak relation in Bangla from syntax exceptin the instances where the words ‘dAna’ (donate) and ‘sampradAna’ (donate) areexplicitly mentioned. This is because in Bangla sampradan are considered when thenature of the verb indicated that something is given selflessly.Some Bangla grammarians have advocated that sampradan karak may not beconsidered separately in Bangla. In this context we quote Suniti Kumar Chatterjifrom Chatterji (2003). He says that while Sanskrit has special bibhakti forsampradan; Bangla does not have this. Some people use sampradan in Bangla forcompatibility with Sanskrit; while others merge sampradan with karma. Heconsiders the second approach reasonable (Page 247) and merged the sampradanwith gauna karma (Page 241 and 299).5.2.1 Subject/KartaSubject/Karta is the one who does, experiences or exists. It can also be or becomesomething. However, it is referred to as Passive Karta when it acts as the doer of anaction in a passive sentence. In Bangla, the karta may take a wide range of suffixes,like genitive, nominative, accusative, locative, etc. The verbs act in different waysand accordingly their subjects are defined. Because of the subcategorization of thesubject, instead of including subject in the relationset we have included itssubcategories.Doer subject/Kriya sampadak karta (k1d): When the verb indicates some mentalor physical exercise by an animate karta then its subject is defined as Doer subject/Kriya sampadak karta (k1d).A dependency annotation scheme for Bangla treebank123(11) bulabulite dhAna kheYechhe.[Indian–nightingale-loc paddy eat-pre,per]Indian nightingale has eaten paddy.k1d(bulabulite, kheYechhe)Experiencer subject/Anubhab karta (k1e): When the verb expresses mental state,emotion or event without the subject’s conscious effort then its subject is defined asExperiencer subject/Anubhab karta (k1e).(12) AmAra shIta karachhe.[I-gen cold do-pre,prog]I am feeling cold.k1e(AmAra, karachhe)Passive subject/Paroksha karta (k1p): When the verb indicates an action in passiveconstruction then its subject is defined as Passive subject/Paroksha karta (k1p). Thissubject acts as logical subject of the sentence or the verb. It is followed bypostpositions ‘dbArA’ (by), ‘diYe’ (by) and ‘karttRRika’ (by) or may take the suffix‘ra’.(13) AnandamaTha ba�Nkimachandra kartRRika rachita <NULL>.[Anandamath Bankimchandra by write-par]Anandamath is written by Bankimchandra.k1p(ba�Nkimachandra, <NULL>)Even though this is the usual Bangla device for passivization, there may beconfusion in certain cases. For example, ‘ra’ suffix is also attached with the karta ofactive voice verbs (See example 12) and ‘dbArA’, ‘diYe’ and ‘karttRRika’postpositions are also attached with karan. Again same word forms of some verbsare used both in active and passive voice. 
For example, ‘haYechhe’ (has been done)in the following first example is in passive voice and in the second one ‘haYechhe’(has</s> |
<s>given birth) is in active voice. Therefore, we assigned separate tag for the kartaof passive voice verbs though it conveys redundant information in many cases.• AmAra dbArA ei kAja haYechhe.[I-gen by this work have-pre,per]This work is done by me.• tAra jbara haYechhe.[he-gen fever have-pre,per]He has caught fever.Noun of proposition/Bidheya karta or Samanadhikaran (k1s): Noun of proposition/Bidheya karta or Samanadhikaran (k1s) is the complement of the karta in thesentence.(14) tini bhAla shikShaka chhilena.[he good teacher be-past]He was a good teacher.k1g(tini, chhilena) k1s(shikShaka, chhilena)S. Chatterji et al.123General subject/Sadharan karta (k1g): The subject is defined as General subject/Sadharan karta (k1g) in the following contexts. The subjects which do not belong toany of the subcategories mentioned above (i.e., k1d, k1e, k1p, and k1s) are alsotagged as k1g.1. Subject of copula or be verb.2. Subject of a verb in which the agent of the action is not specified, though it maybe implied.3. The subject which is in r6 relation with another noun or pronoun.(15) AmAdera siTi kaleja khuba bhAla <NULL>.[I-pl,gen city college very good]Our city college is very good.k1g (kaleja, <NULL>)5.2.2 Object/KarmaObject/Karma refers to an object undergoing the action or a person or an objectbeing affected by the action. It also includes some things or positions beingachieved or attained through the action. A Bangla sentence with ditransitive verb orcausative verb may have two karma. Similarly, transitive verbs and non-transitiveverbs may have one and zero karma, respectively. In both active and passiveconstructions, the karma is tagged in the same way. Because of the subcategori-zation of the object, instead of including object in the relationset we have includedits subcategories.Transitive object/Sakarmak karma (k2t): Transitive object/Sakarmak karma (k2t)is the karma of a transitive verb.(16) bhUmikampa sAjAno gochhAno shaharaTA dhba.nsa karala.[earthquake decorating sorting city-sp destroy do-past]Earthquake destroyed the beautiful city.k2t(shaharaTA, karala)Direct object/Mukhya karma (k2m) & Indirect object/Gauna karma (k2g): The twoobjects of a ditransitive verb of both passive and active sentences may be tagged asfollows. Direct object/Mukhya karma (k2m) is the karma which undergoes theaction. Indirect object/Gauna karma (k2g) is the one which is affected by the action,or a recipient or a beneficiary of the action. These two karma generally co-occur.(17) Ami mAke chiThi likhachhi.[I mother-acc letter write-pre,prog]I am writing a letter to my mother.k2g(mAke, likhachhi) k2m(chiThi, likhachhi)Purposive object/Uddyeshya karma (k2u) & Predicative object/Bidheya karma(k2s): In the case of active as well as passive sentences with ditransitive verb, if thetwo objects are in complementary relation, the one which takes the complement isA dependency annotation scheme for Bangla treebank123defined as Purposive object/Uddyeshya karma (k2u). This karma is attached withthe suffix ‘ke’. Another one which stands as a complement is defined as Predicativeobject/Bidheya karma (k2s).(18) tini buddhadebake parameshbarera abatAra balena.[he Buddhadeb-acc parameswar-gen incarnation tell-pre]He regards Buddhadeb as the incarnation of God.k2u(buddhadebake, balena) k2s(abatAra, balena)A ditransitive verb may take a pair of arguments which refer to the same thing orperson. We tag them as k2u and k2s. A ditransitive verb may also take a pair ofarguments which refer to two different things or persons. 
We tag them as k2m andk2g. The nature of</s> |
<s>such pairs of arguments are different. For example, in thefollowing sentence (i), ‘gAndhike’ (Gandhi-to) and ‘bApu’ (Bapu) are two karma of‘balA haYa’ (called). These two karma refer to same person. Therefore, they aretagged as k2u and k2s, respectively. In the following sentence (ii), ‘bApu’ (Bapu) isthe agent which is tagged as k1d and ‘gAndhike’ (Gandhi-to) and ‘kathATA’ (thewords) are two karma of ‘balala’ (told) and they are tagged as k2g and k2m,respectively.(i.) gAndhike bApu balA haYa.[Gandhi-acc Bapu say be-pre]Gandhi is called Bapu.(ii.) bApu gAndhike kathATA balala.[Bapu Gandhi-acc word-sp say-past]Bapu told the words to Gandhi.5.2.3 Instrumental/KaranInstrumental/Karan(k3):Instrumental/Karan (k3) refers to a thing or object whichacts as an instrument or means for performing an action or the occurrence of anaction.(19) gAdhAke chAbuka mAro.[donkey-acc whip beat-pre]Whip the donkey.k3(chAbuka, mAro)5.2.4 Ablative/ApadanAblative/Apadan is the source or origin of an action or the point of time whichindicates the source of an action or the distance between two places. Because of thesubcategorization of the ablative, instead of including ablative in the relationset wehave included its subcategories.Place related ablative/Sthanbachak apadan (k5p): Place related Ablative/ Sthan-bachak apadan (k5p) refers to the source or origin of an act.S. Chatterji et al.123(20) nadIra ghATa theke ghaTa bhese ela.[river-gen bank from pot float-nf come-past]The pot came floating from the river bank.k5p(ghATa, bhese)State related ablative/Abasthabachak apadan (k5s): State related ablative/ Abastha-bachak apadan (k5s) is the state from where the action takes place. Here karta is notdisplaced, only object is displaced away from the state of karta.(21) AmAra ghara theke mandirera chU.DA dekhA yAYa.[I-gen house from temple-gen pinnacle see go-pre]The pinnacle of the temple can be seen from my house.k5s(ghara, dekhA)Time related ablative/Kalbachak apadan (k5t): Time related ablative/Kalbachakapadan (k5t) refers to the point of time which indicates the source of an action.(22) sakAla theke bRRiShTi nemechhe.[morning from rain get–down-pre,per]It is raining since morning.k5t(sakAla, nemechhe)Distance related ablative/Duratbabachak apadan (k5d): Distance related ablative/Duratbabachak apadan (k5d) indicates the distance between two places one ofwhich is the starting point and the other one is the ending point. The starting pointplace is tagged as k5d. The corresponding sentence contains two place names, adistance measure (either real or abstract) and a copula.(23) dilli theke kolakAtA bahu dUre <NULL>.[ Delhi from Kolkata too far-loc]Delhi is too far from Kolkata.k5d(dilli, <NULL>)5.2.5 Locative/AdhikaranLocative/Adhikaran may be either the place where the action takes place, or thetime when it takes place, or the domain about which it takes place or the state inwhich it takes place. 
Because of the subcategorization of the locative, instead ofincluding locative in the relationset we have included its subcategories.Place related locative/Deshadhikaran (k7p): Place related locative/Deshadhik-aran (k7p) refers to the place that indicates the occurrence of the act.(24) bA.Dite phulera gAchha Achhe.[house-loc flower-gen tree be-pre]There are flowering trees in the house.k7p(bA.Dite, Achhe)Time related locative/Kaladhikaran (k7t): Time related locative/Kaladhikaran (k7t)is that point or duration of time which indicates the occurrence of the act.A dependency annotation scheme for Bangla treebank123(25) bhore sUrya oThe.[dawn-loc sun rise-pre]The Sun rises in the dawn.k7t(bhore, oThe)Domain related locative/Bishayadhikaran (k7d): Domain related locative/ Bisha-yadhikaran (k7d) is the thing or area,</s> |
<s>or things constituting a domain which can bepursued for study, or can be pursued as a profession.(26) tArA sAhitye paNDita <NULL>.[he-pl literature-loc expert]They are expert in literature.k7d(sAhitye, <NULL>)State related locative/Bhabadhikaran (k7s): State related locative/Bhabadhikaran(k7s) refers to the state of being for something or someone at a particular time.(27) tArA khuba sukhe Achhe.[he-pl very happy-loc be-pre]They are living with happiness.k7s(sukhe, Achhe)5.2.6 Other case-alike/Anyanya karak-samaOther than the five karak relations mentioned above, there are some more relationsbetween a noun and a verb and between two nouns which are often confused withthe karak relations.Reason/Hetu (rh): Reason/Hetu (rh) represents the reason of the action.(28) bhaYe bhule yAYa debatAra nAma.[fear-loc forget-nf go-pre God-gen name]The name of God is forgotten out of fear.rh(bhaYe, bhule)Purpose/Uddeshya (ru):Purpose/Uddeshya (ru) represents the purpose of the action.(29) ratana unnatira janya kaThora parishrama kare.[Ratan promotion-gen for hard work do-pre]Ratan works hard for his promotion.ru(unnatira, kare)Destination/Gantabyasthal (des): Destination/Gantabyasthal (des) is the placerelated argument of the nontransitive moving verbs ‘yAoYA’ (go), ‘bhramana karA’(travel), etc. Instead of locative (‘e’, ‘te’, etc. bibhakti) markers they may takenominative marker (0 bibhakti). It is a karma but it behaves like an adhikaran.(30) rAma bA.Di giYechhila.[Ram home go-past,per]S. Chatterji et al.123Ram went home.des(bA.Di, giYechhila)Possession/Dakhal (r6v): Possession/Dakhal (r6v) is the relation between a nounand a verb where the noun acts as an owner. The owned part is considered as karma.This owner noun takes genitive marker and the corresponding verb is a ‘be’ verb(existential).(31) rAmera ekaTA meYe Achhe.[Ram-gen one-sp daughter be-pre]Ram has a daughter.r6v(rAmera, Achhe)Comparison/Taratamya (Compr): Comparison/Taratamya (Compr) indicates acomparison between two noun phrases or between their attributes. For example,the phrase, ‘‘mitAra theke sundarI meYe’’ [The girl more beautiful than Mita]indicates the comparison between the ‘sundarI’ attribute of ‘mitA’ and ‘meYe’.(32) ekhAne aneke phuTabalera theke krikeTa pachhanda karena.[here-loc many-loc football-gen from cricket like do-pre]Here many people like cricket more than football.compr(phuTabalera, krikeTa)Similarity/Sadrishya (sim): Similarity/Sadrishya (sim) describes the similaritybetween two noun phrases or between the attributes of two noun phrases.(33) se AmAra bonera mata <NULL>.[he I-gen sister-gen like]She is like my sister.sim(bonera, se)5.3 Modifier relationsMost of the modifier relations can also be considered as intrachunk relations.However, sometimes, it is observed that the modifier relations exist between twodifferent chunks.Genitive/Sambandha (r6): Genitive/Sambandha (r6) refers to the relationbetween a noun or a pronoun with genitive marker and another noun. The r6dependency relation relates certain pairs of nouns. Some possible relations existsbetween such noun pairs are shown in Table 1 with an example of each.For differentiating the r6 relation from other relations of the noun with genitivemarker in a sentence, the following points may be considered.1. A noun or pronoun with genitive marker, preceding a postposition is not acandidate of r6 relation.2. 
A noun or pronoun with genitive marker that precedes a noun of a complexpredicate is not a candidate of r6 relation.A dependency annotation scheme for Bangla treebank1233. A noun or pronoun with genitive marker, which are related to a mental verbgroup is not a candidate of r6 relation.4. A noun or pronoun with genitive marker, which is a karta of a passive verb isnot</s> |
<s>a candidate of r6 relation.5. A noun or pronoun with genitive marker, which is related with a verb by r6v isnot a candidate of r6 relation.Associative relation/Saharthak sambandha (ras) & Non-associative relation/Namarthak sambandha (rasneg): Associative relation/Saharthak sambandha (ras)occurs with a noun or pronoun which accompanies another noun or pronoun in kartaor karma position and it is followed by the postpositions ‘sa�Nge’ (with), ‘sAthe’(with), or ‘diYe’ (by). The role of saharthak noun is same as the role of the nounwith which it is attached. If it is attached with a karta, its role is a karta and if it isattached with a karma, its role is a karma. When the occurrence of such sambandhais negated, it is defined as Non-associative relation/Namarthak sambandha (rasneg).Namarthak sambandha takes postpositions ‘chhA.DA’, or ‘binA’.(34) simA nIrAra sAthe melAte gela.[Sima Nira-gen with fair-loc go-past]Sima went to the fair with Nira.ras(nIrAra, simA)(35) se chini chhA.DA chA khete pachhanda kare.[he sugar without tea eat-inf like do-pre]He likes to drink tea without sugar.rasneg(chini, chA)Table 1 Relations between the noun pairs which are included in r6Relation Bangla Example English TranslationPossession rAjAra rAjya king’s kingdomPart shishura mukha face of a babyLocation jalera mAchha fish in waterFunction khAoYAra thAlA plate for eatingSource sApera bhaYa fear from snakeMaterial sonAra gaYanA ornaments made of goldMeasurement du ghanTAra patha two hour’s journeyCause effect sUryera Alo sunlightAttribute premera galpa love storySequence pA.Nchera pRRiShThA page number fiveSimile j� nAnera Alo light of wisdomObject Ishbarera sAdhanA worship of GodProgeny gAchhera phala fruits of treeAdjectival modifier guNera chhele boy in good qualityS. Chatterji et al.123Noun noun modifier/Sanyogmulak bisheshya (nnmod): Noun noun modifier/Sanyogmulak bisheshya (nnmod) refers to the relation between two nouns. Thetwo nouns, however, should not be in r6 relation.(36) rAjaputa jAti khuba yoddhA jAti <NULL>.[Rajput cast very warrior cast]The Rajputs are great warriors.nnmod(rAjaputa, jAti) nnmod(yoddhA, jAti)Adjective noun modifier/Bisheshyer bisheshan (jnmod): Adjective noun modifier/Bisheshyer bisheshan (jnmod) is used to relate an adjective which modifies themeaning of a noun or an adjective. For the purpose of simplification, quantifier,numerical modifier etc. 
have been included into this relation.(37) se uchcha phalanashIla bIja diYe chASha kare .[he high productive seed by cultivation do-pre]He cultivates using high productive seeds.jnmod(uchcha, phalanashIla) jnmod(phalanashIla, bIja)Demonstrative noun modifier/Nirnay suchak sarbanam (dnmod): Demonstrativenoun modifier/Nirnay suchak sarbanam (dnmod) is the relation between the head ofa noun phrase and its demonstrative.(38) sei meYeTA nAcha karate pAre nA.[that girl dance do-inf can-pre no]That girl can not dance.dnmod(sei, meYeTA)Pronominal noun modifier/Sarbanamjata bisheshan (pronmod): Pronominal nounmodifier/Sarbanamjata bisheshan (pronmod) accounts for the relation between a noun ora personal pronoun and a reflexive pronoun which functions as an emphatic pronoun.(39) sbaYa.n sbAmIji ei kathA bishbAsa karena.[himself Swamiji this word belief do-pre]Swamiji himself believes this fact.pronmod(sbaYa.n, sbAmIji)Participial noun modifier/Kridanta bisheshan (pnmod): Participial noun modifier/Kridanta bisheshan (pnmod) is the relation between a participial verb and a noun.The participial verb, here, modifies the noun.(40) bAire rAkhA kApa.Dagulo bRRiShTite bhije gela.[outside-loc keep-par cloth-pl rainloc wet-nf go-past]The clothes kept outside became wet in rain.pnmod(rAkhA, kApa.Dagulo)Appositional noun modifier/Tulyarupe sthapita bisheshan (anmod): Appositionalnoun modifier/Tulyarupe sthapita bisheshan (anmod) relates a noun phrase withanother immediately following</s> |
<s>noun phrase, both indicating the same person orthing.A dependency annotation scheme for Bangla treebank123(41) manamohana si.nha, bhAratera pradhAnamantrI, bidesha yAchchhena.[Manmohan Singh , India-gen prime–minister abroad go-pre,prog]Manmohan Singh, the Prime minister of India, is going to abroad.anmod(pradhAnamantrI, manamohana)Adverbial modifier/Kriya bisheshan jatiya bisheshan (adv): Adverbial modifier/Kriya bisheshan jatiya bisheshan (adv) relates an adverb with a verb.(42) rimi shAntabhAbe galpa balala.[Rimi quietly story tell-past]Rimi told the story quietly.adv(shAntabhAbe, balala)Verb-verb modifier/Kriya jatiya bisheshan (vmod): Verb-verb modifier/Kriya jatiyabisheshan (vmod) is the relation between two verbs indicating two sequential orparallel actions.(43) bRRiShTite pukurera mAchha nadIte giYe pa.Dala.[rain-loc pond-gen fish river-loc go-nf fall-past]Due to rain, the pond fishes got into the river.vmod(giYe, pa.Dala)Negation modifier/Namarthak abyay (neg): Negation modifier/Namarthak abyay(neg) is the relation between a negation word ‘nA’ and ‘ni’ and the verb it modifies.(44) TinA bA.Di yAbe nA.[Tina home go-fut no]Tina will not go home.neg(nA, yAbe)Adjectival complement/Bidheya bisheshan (acomp): Adjectival complement/ Bid-heya bisheshan (acomp) is the relation between an adjective and a verb where theadjective is a complement to the verb.(45) ei mandiraTi khuba prAchIna <NULL>.[this temple-sp very old]This temple is very ancient.acomp(prAchIna, <NULL>)Question words are tagged according to their usage in an interrogative sentence.How question words act in sentences are shown below.• ‘ke’ (who-singular), ‘ki’ (what), ‘kArA’ (who-plural), ‘konaTA’ (which) etc. actas karta.• ‘ki’ (what), ‘kAke’ (whom), ‘kAdera’ (whose-plural), ‘konaTA’ (which) etc. askarma.• ‘kena’ (why) is used as reason (rh) or purpose (ru).• ‘kakhana’ (when), ‘kothAYa’ (where) etc. act as adhikaran.• ‘kata’ (how much) is tagged as acomp or jnmod.• ‘ki’ (what) is sometimes added with yes/no type question and is tagged asparticle (par).S. Chatterji et al.123• When the question words ‘ki’ (what) and ‘kata’ (how much) are related with thenoun then the dnmod and jnmod tags are used, respectively.5.4 Few other interchunk relationsSome other interchunk relations are defined below.Conjunct/Samyojak abyay (ccof): Conjunct/Samyojak abyay (ccof) conjoinsbetween two independent words, phrases or clauses using the conjunct words‘eba.n’ (and), ‘kintu’ (but), ‘o’ (and), etc. or using the conjunct symbols ‘,’, ‘-’ etc.(46) darshaka upache pa.Dla nA kintu udaghATita hala eka natuna dika.[audience overflow-nf fall-past no but reveal-par be-past one new direction]The audience was not overflowing, but a new direction was revealed.ccof(upache, kintu) ccof(hala, kintu)Preconjunct/Abasthatmak abyay (pcc): Preconjunct/Abasthatmak abyay (pcc)relation is tagged between a subordinating conjunction and a verb. Thesubordinating conjunction may be in the preconjunct or in the postconjunctconstruction.(47) yadio Ami bAraNa karalAma tabuo se gela.[though I forbid do-past still he go-past]Though I forbade him, still he went.pcc(gela, tabuo) pcc(karalAma, yadio)Address word/Sambodhan sabda (rad): Address word/Sambodhan sabda (rad) is therelation between the address word and the verb. 
Address word is used when a personis addressed to.(48) sImA, tomAra yAoYA chalabe nA.[Sima, you-gen go move-fut no]Sima, you ought not to go.rad(sImA, chalabe)Particle/Bakyalankar abyay (par): Particle/Bakyalankar abyay (par) is the relationbetween the particle and the verb.(49) nyAyya kAraNa kintu nei.[logical reason but be-pre,neg]But there is no logical reason.par(kintu, nei)Question mark/Prashnabodhak chihna (qs): Question mark/Prashnabodhak chihna(qs) relation is used between the question mark (?) of the interrogative sentence andthe verb.(50) tomAra kAke beshi pachhanda <NULL>?[you-gen who-acc more like]A dependency annotation scheme</s> |
<s>for Bangla treebank123Whom do you like most?qs(?, <NULL>)End/Samapti (end): End/Samapti (end) is used to indicate the relation between non-interrogative sentence markers (j and !) and the verb of the main clause.(51) ete jami kama naShTa haYa.[it-loc land less spoil be-pre]For this, misuse of land is minimized.end(., haYa)Symbol/Chihna (sym): Symbol/Chihna (sym) is the relation between a symbolexcept the sentence end markers and those which are used to join two independentclauses and the verb.(52) rohana, tui chA khete yA.[Rohan, you tea eat-inf go-pre]Rohan, go and have tea.sym(,, yA)Dependent clauses are attached to the main clause with subordinating conjunc-tions. When the subordinating conjunction comes before the dependent clause, thenit is called preconjunct. When the subordinating conjunction comes after thedependent clause, then it is called postconjunct. If a single clause sentence containsa subordinating conjunction, then it is tagged as particle. It is generally placed at thebeginning of the sentence.5.5 Interclause relationsInterclause relations include relative and complement clause constructions wherethere is a main clause and at least one dependent clause. In relative clauseconstruction the dependent clause is connected with the main clause by a referent. Insome cases a coreferent (in accordance with the referent) may also occur in theinitial position of the main clause.Referent/Nirdesak (ref): Referent/Nirdesak (ref) relation is found between twoclauses when a coreferent also surfaces. The role of the dependent clause is the sameas the coreferent in the main clause. The ref relation is assigned between the verb ofthe dependent clause and the coreferent.(53) yakhana bRRiShTi pa.Dachhila takhana se chhAtA niYe gela.[when rain fall-past then he umbrella take-nf go-past]He took an umbrella when it was raining.ref(pa.Dachhila, takhana)Clausal Star/Bakyamsha samagra (clausal*): Clausal Star/Bakyamsha samagra(clausal*) relation is found between two clauses when the coreferent does notsurface, and the relation is assigned between the verb of the dependent clause andthe verb of the main clause. Here, ‘*’ is a variable indicating different dependencyrelations such as k1 (karta), k2 (karma), k3 (karan), k5 (apadan), rh(hetu), k7pS. Chatterji et al.123(deshadhikaran), k7t (kaladhikaran), etc. The ‘*’ indicates the dependency relationbetween the two clauses. For simplicity, we have not considered subdivisions ofkaraks (except adhikaran) in this variable position.(54) yakhana bRRiShTi pa.Dachhila se chhAtA niYe gela.[when rain fall-past he umbrella take-nf go-past]He took an umbrella when it was raining.clausalk7t(pa.Dachhila, niYe)(55) ye ba;iTA Ami niYechhilAma Ami pherata diYe diYechhi.[which book-sp I take-past I return give-nf give-pres,per]I have returned the book which I took.clausalk2(niYechhilAma, diYe)Clausal complement/Bakyamsha sampurak (clausalcomp): In a complement clauseconstruction, the dependent clause is connected with the main clause by acomplementizer. The dependent clause is considered as clausal complement of themain clause. 
Clausal complement/Bakyamsha sampurak (clausalcomp): In a complement clause construction, the dependent clause is connected with the main clause by a complementizer. The dependent clause is considered the clausal complement of the main clause. The verb of the main clause and the verb of the complement clause are connected by the Clausal complement/Bakyamsha sampurak (clausalcomp) dependency relation.

(56) se balala ye se kAla kolakAtA yAbe.
     [he tell-past that he tomorrow Kolkata go-fut]
     He said that he will go to Kolkata tomorrow.
     clausalcomp(yAbe, balala)

Complementizer/Sampurak (comp): In the complement clause construction, the Complementizer/Sampurak (comp) relation holds between the complementizer and the verb of the complement clause. The complementizer, which introduces the complement clause, acts as a connector between the main clause and the dependent clause. It is attached to the verb of the complement clause. In our analysis, Bangla complementizers are 'ye' (that), 'bale' (that) or a comma. In example (56), 'ye' is related with the verb 'yAbe' by the comp dependency relation.
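An annotated sentence can be viewed as a set of labelled word pairs mirroring the rel(x, y) notation used in the examples. The container below is our own illustration of that notation, not the storage format of KGPBenTreebank itself; the argument order simply follows the order printed in the examples.

# Typed dependency triples in the rel(arg1, arg2) notation of the text.
from typing import NamedTuple

class Dependency(NamedTuple):
    label: str   # e.g. "clausalcomp", "comp", "ref"
    arg1: str    # first argument as printed, e.g. "yAbe"
    arg2: str    # second argument as printed, e.g. "balala"

# Example (56): se balala ye se kAla kolakAtA yAbe.
example_56 = [
    Dependency("clausalcomp", "yAbe", "balala"),  # complement-clause verb / main verb
    Dependency("comp", "ye", "yAbe"),             # complementizer / clause verb
]

for dep in example_56:
    print(f"{dep.label}({dep.arg1}, {dep.arg2})")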
Apart from being a complementizer, 'bale' (due to) can also indicate a reason clause (clausalrh) in multiclause Bangla sentences. It can also be used as a particle or a verb. Similarly, 'ye' (that or who) can also be a demonstrative pronoun (as shown in example (55)) or a personal pronoun which acts as the referent in the subordinate clause. 'ye' can also be used as a particle.

6 Analysis of the scheme and the annotation process

We have used the proposed dependency annotation scheme to build KGPBenTreebank (the annotation has been done using the Sanchay annotation tool of Singh (2011)). Then we have analyzed both the annotation scheme and the resource using standard analysis methods used in many other treebanks. These analyses show the usability of the resource for other research purposes.

6.1 Corpus statistics

We have used the dependency relations to annotate a treebank of 4167 sentences (56,514 words). The sentences are taken from blogs, Multikulti (www.multikulti.org.uk), Wikipedia and a portion of the CIIL corpus. Each lexical item is annotated manually at two different levels, namely lexical category and morphological values, as a part of the Part-of-Speech Tagset (IL-POST) project undertaken by MSR India. These two-level manually annotated sentences are used as input to our annotation task. The distribution of the length (number of words) of the sentences is plotted in Fig. 2. The length varies from 3 to 67 words, while 93.95 % of the sentences (3913 sentences) have a length between 4 and 29.

[Fig. 2: Number of sentences with their length.]

6.2 Tagset overview

We have sorted the relations based on their number of occurrences in KGPBenTreebank. Some relations from the top and bottom positions of the list, together with their numbers of occurrences, are shown in Table 2. The maximum number of usages is found for the ccof relation, as there are two ccof tags from the conjunct to each of the parts joined by the conjunct. The next highest number of occurrences is found for jnmod, due to its use for different types of modifiers such as quantifiers, numerical modifiers, etc. The number of occurrences of pof indicates the number of complex predicates in the corpus. There are 1039 occurrences of clausalk2. k5d, clausalk7p and clausalk7d have limited usage in Bangla.

Table 2: Top members with the highest and lowest numbers of occurrences

    ccof    5025        k5d, clausalk7p     1
    jnmod   4952        clausalk7d          3
    k1g     4312        anmod              14
    end     4006        rasneg             15
    r6      3397        k1p                26
    nnmod   3143        k5p                27
    k2t     3098        k5t                35
    pof     2562        compr              38

The distribution of the occurrences of dependency relations in the annotated corpus is shown in Fig. 3. Among the 63 dependency relations, 46 have been used fewer than 1,000 times and 10 have been used more than 2000 times. Thus, most relations of the tagset capture the relations of only a small number of word pairs.

[Fig. 3: Distribution of occurrences of the dependency relations.]
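The per-relation occurrence counts behind Table 2 amount to plain frequency counting over the relation labels of the annotated corpus. A minimal sketch follows; the input format (one list of labels per sentence) is a hypothetical representation of our own, not the treebank's file format.

# Count how often each dependency relation label occurs in a corpus.
from collections import Counter

def relation_counts(annotated_sentences):
    """annotated_sentences: iterable of lists of relation labels."""
    counts = Counter()
    for labels in annotated_sentences:
        counts.update(labels)
    return counts

toy_corpus = [
    ["k1g", "k2t", "end"],
    ["ccof", "ccof", "k1g", "jnmod", "end"],
]
for tag, n in relation_counts(toy_corpus).most_common():
    print(tag, n)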
6.3 Inter-annotator agreement

Three annotators have annotated KGPBenTreebank over 3 years. To calculate the agreement between the annotators, a part of KGPBenTreebank containing 485 sentences has been annotated independently by all of them. We calculate the inter-annotator agreement using Fleiss' Kappa as discussed in Fleiss (1971). Fleiss' Kappa gives a measure (between 0 and 1) of agreement for more than 2 annotators. It is a generalization of Scott's pi statistic of Scott (1955) for inter-annotator reliability. Fleiss' Kappa is calculated as follows.

Suppose N is the total number of words, n is the number of annotators assigning dependency relations to the words, and K is the number of dependency relations used for assignment. N x n is the total number of assignments of relations made by the annotators. Let the subscript i, where i = 1, ..., N, represent the words, and the subscript j, where j = 1, ..., K, represent the dependency relations. Then n_{ij} is the number of annotators assigning the ith word to the jth dependency relation. The proportion of all assignments used for assigning the jth dependency relation may be defined using Eq. (1), and the expected chance agreement over all dependency relations using Eq. (2).

    P_j = \frac{1}{Nn} \sum_{i=1}^{N} n_{ij}                    (1)

    \bar{P}_e = \sum_{j=1}^{K} P_j^2                            (2)

Among the n(n-1) pairs of annotators, the extent to which the annotator pairs agree on the ith word is defined using Eq. (3). The mean of the agreements over all words may be defined using Eq. (4).

    P_i = \frac{1}{n(n-1)} \sum_{j=1}^{K} n_{ij}(n_{ij} - 1)    (3)

    \bar{P} = \frac{1}{N} \sum_{i=1}^{N} P_i                    (4)

The degree of agreement between n annotators is computed in terms of Fleiss' Kappa (\kappa) using Eq. (5).

    \kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}          (5)

When the annotators agree on all assignments, \kappa = 1. The Fleiss' Kappa for the three annotators on the 485 sentences containing 6356 words is calculated to be 0.9288. So, the annotators have almost perfect agreement while annotating KGPBenTreebank.
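Eqs. (1)-(5) translate directly into code. The sketch below is a straightforward implementation under the definitions above; the toy input at the end is made-up illustrative data, not the annotation counts of the study.

# Fleiss' Kappa from an N x K count matrix, where counts[i][j] is the
# number of annotators assigning relation j to word i; each row sums to n.
def fleiss_kappa(counts):
    N = len(counts)        # number of words
    n = sum(counts[0])     # number of annotators (row sum)
    K = len(counts[0])     # number of dependency relations

    # Eq. (1): proportion of all assignments that used relation j
    P_j = [sum(row[j] for row in counts) / (N * n) for j in range(K)]
    # Eq. (2): expected chance agreement
    P_e = sum(p * p for p in P_j)
    # Eq. (3): per-word agreement over the n(n-1) annotator pairs
    P_i = [sum(c * (c - 1) for c in row) / (n * (n - 1)) for row in counts]
    # Eq. (4): mean observed agreement
    P_bar = sum(P_i) / N
    # Eq. (5): chance-corrected agreement
    return (P_bar - P_e) / (1 - P_e)

# Three annotators, two relations, perfect agreement -> kappa = 1.0
print(fleiss_kappa([[3, 0], [0, 3], [3, 0]]))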
6.4 Length of dependencies

The average length of the dependency relations is 2.82, while the longest and shortest dependency relations are clausalcondition and qsk2, with average lengths of 8.35 and 1, respectively.

In KGPBenTreebank, among 56,524 words, 4167 words (one word from each sentence) are the roots of the sentences. Root words are not attached to any other word. The remaining 52,357 words of KGPBenTreebank are attached to some other word of the sentence. Among them, 28876 words are related to adjacent words (the previous or next word), 8290 words are related to words at distance 2 (previous-to-previous or next-to-next word), and so on. The length of the dependency relations is plotted against the log of the number of words attached at the corresponding distance in Fig. 4. The first two points of the plot are (X=1, Y=log10(28876)) and (X=2, Y=log10(8290)).

[Fig. 4: Distribution of words based on the length of their head dependencies.]

From this plot we observe that the relations in KGPBenTreebank are mostly short-distance relations, and as the length of the relations increases, the log of the number of words related at that distance decreases almost exponentially.
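The head-distance statistics behind Fig. 4 can be derived by counting, for every non-root word, the linear distance to its head, and taking log10 of the counts. The sketch below assumes a hypothetical representation in which each word carries its own index and its head's index (None for the root).

# Head-distance histogram over a toy treebank; distances are |i - head(i)|.
import math
from collections import Counter

def distance_histogram(sentences):
    """sentences: list of lists of (index, head_index or None) pairs."""
    hist = Counter()
    for words in sentences:
        for idx, head in words:
            if head is not None:          # skip the root of each sentence
                hist[abs(idx - head)] += 1
    return hist

toy = [[(0, 1), (1, None), (2, 1)], [(0, 2), (1, 2), (2, None)]]
for dist, count in sorted(distance_histogram(toy).items()):
    print(dist, count, round(math.log10(count), 3))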
7 Comparison between Hindi and Bangla schemes

The dependency annotation model discussed in this paper borrows from the Paninian grammatical model of Bharati et al. (1999), the Hindi dependency scheme of Sharma et al. (2007) used in Anncorra, the typed dependency relation model of de Marneffe and Manning (2008) followed in Stanford Parser v1.6.9, and the modern Bangla grammatical models discussed by Chatterji (2003) and Chakravarty (2010).

In both KGPBenTreebank and Anncorra, karak relations are divided into subcategories (finer classes). The differences between these division schemes are mentioned in Sect. 7.1. A detailed explanation of the differences between them is given in Sect. 7.2.

7.1 Differences of karak division schemes

In KGPBenTreebank, five finer divisions of the subjective case (karta karak) are used, on the basis of the type of the activity of the subject.

1. doer subject (Kriya sampadak karta)
2. experiencer subject (Anubhab karta)
3. passive subject (Paroksha karta)
4. noun of proposition (Bidheya karta or Samanadhikaran)
5. general subject (Sadharan karta)

In Anncorra, six finer divisions of the subjective case (karta karak) are used.

1. doer (karta)
2. causer subject (prayojaka karta)
3. causee subject (prayojya karta)
4. mediator causer (madhyastha karta)
5. noun complement of subject (karta samanadhikarana)
6. clausal subject

In KGPBenTreebank, five finer divisions of the objective case (karma karak) are used.

1. transitive object (sakarmak karma)
2. direct object (mukhya karma)
3. indirect object (gauna karma)
4. purposive object (uddyeshya karma)
5. predicative object (bidheya karma)

In Anncorra, four finer divisions of the objective case (karma karak) are used.

1. direct object (mukhya karma)
2. indirect object (gauna karma)
3. object complement (karma samanadhikaran) [In KGPBenTreebank it occurs as purposive object, predicative object and complement clause.]
4. goal, destination (k2p) [In KGPBenTreebank we have used it as an other-cases-alike relation.]

In KGPBenTreebank, the sampradan karak is not considered and is merged with object/karma. There are two finer divisions of sampradan (k4) in Anncorra.

1. recipient (sampradan)
2. experiencer karta (anubhava karta)

In KGPBenTreebank, place and time are tagged differently. Therefore, there are four finer divisions of the ablative case (apadan karak).

1. place related ablative (sthanbachak apadan)
2. state related ablative (abasthabachak apadan)
3. time related ablative (kalbachak apadan)
4. distance related ablative (duratbabachak apadan)

In Anncorra, the ablative case (apadan karak) has two finer divisions.

1. apadan karak/source, which is related to both place and time
2. prakriti apadan 'source material' in verbs denoting change of state

In KGPBenTreebank, the locative case (adhikaran karak) has four finer divisions, as domain and state (or condition) are treated differently.

1. place related locative (deshadhikaran)
2. time related locative (kaladhikaran)
3. domain related locative (bishayadhikaran)
4. state related locative (bhabadhikaran)

In Anncorra, the locative case (adhikaran karak) has three finer divisions.

1. place related locative (deshadhikaran)
2. time related locative (kaladhikaran)
3. domain related locative (vishayadhikaran)

The relations used for connecting two clauses in KGPBenTreebank are given below. Here, '*' can be replaced by the relation of the two clauses.

1. referent (nirdesak)
2. clausal* (bakyamsha samagra)
3. clausal complement (bakyamsha sampurak)
4. complementizer (sampurak)

Anncorra has three types of relative clauses.

1. nmod_relc (relative clause constructions modifying a noun)
2. rbmod_relc (jo-vo construction modifying an adverb)
3. jjmod_relc (jo-vo clause construction modifying an adjective)
7.2 Overall differences of annotation schemes

Extensive work has been done on developing the Hindi treebank. The dependency annotation schemes of Sharma et al. (2007) and Begum et al. (2008) are used for annotating Anncorra. We have borrowed some features of these annotation schemes in the proposed Bangla dependency annotation scheme. Therefore, we discuss the differences between the existing Hindi annotation schemes and the scheme proposed in this paper.

Each difference is explained with an example and a dependency structure. The attachments and tags used by Anncorra which differ from KGPBenTreebank are shown using thick lines and boldface in the dependency structures. The intrachunk relations of KGPBenTreebank are not included in the Anncorra chunk-level dependency treebank; the intrachunk attachments are shown using dotted lines and intrachunk relations are underlined.

• Unlike the Hindi treebank, causative subject [prayojaka karta] (pk1), mediator causer [madhyastha karta] (mk1) and causee subject [prayojya karta] (jk1) are not used in KGPBenTreebank. Rather, we considered pk1 as karta, mk1 as karan (as both have a similar syntactic structure), and jk1 as karma. If there is another karma in the sentence, then jk1 has been considered as the mukhya karma and the other one as the gauna karma (a small sketch of this relabeling is given after these examples). Suniti Kumar Chatterji (2003) and Bamandev Chakravarty (2010) have given similar explanations of this concept. The difference is shown with the dependency structure of example 57 in Fig. 5.

(57) sitA AYA dbArA bAchchAke khAbAra khAoYAchchhe. (Bangla)
     [Sita nurse by child-acc food feed-pre,prog]
     sitA AyA se bachche ko khAnA khilA rahI hai. (Hindi)
     Sita is feeding food to the child by the nurse.

[Fig. 5: Dependency tree of example 57.]

• Following the opinion in Chatterji (2003), we have merged the recipient (sampradan karak) relation with the object (karma karak) relation, as explained in Sect. 5.2. See the dependency structure of example 58 in Fig. 6a.

(58) rAma mohanake kShIra dila. (Bangla)
     [Ram Mohan-acc rice-pudding give-past]
     rAma mohana ko kShIra dI. (Hindi)
     Ram gave rice-pudding to Mohan.

• The prati upapad 'direction' (rd) relation of the Hindi treebank is not used in KGPBenTreebank. We have considered that the direction towards which something is moving is the place where the subject wants to go or reach. Therefore, the corresponding relation is treated as destination/gantabyasthal (des). See the dependency structure of example 59 in Fig. 6b.

(59) sitA grAmera dike yAchchhila. (Bangla)
     [Sita village-gen direction-loc go-past,prog]
     sitA gA.Nba kI aura jA rahI thI. (Hindi)
     Sita was going towards her village.

[Fig. 6: (a) Dependency tree of example 58. (b) Dependency tree of example 59.]

• With respect to the locative case, Anncorra treats both domain and state (or condition of thing or mind) as vishayadhikaran/location. However, KGPBenTreebank has two different tags for these two different cases. See the dependency structures of examples 60 and 61 in Fig. 7a, b, respectively.

(60) tArA sukhe Achhe. (Bangla)
     [he-pl happy-loc be-pre]
     be khusha hai. (Hindi)
     They are living with happiness.

(61) se sa.ngIte pAradarshI <NULL>. (Bangla)
     [he song-loc expert]
     bo gAne me mAhira hai. (Hindi)
     He is an expert in song.

[Fig. 7: (a) Dependency tree of example 60. (b) Dependency tree of example 61.]
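The relabeling convention of the first bullet above can be expressed as a small conversion helper. This is our own illustration of the stated convention, not a tool from either treebank; the function name and the flag are assumptions.

# Fold Anncorra's causative tags into the categories used in KGPBenTreebank:
# pk1 -> karta, mk1 -> karan, jk1 -> karma (mukhya karma when another karma
# is present in the sentence, the other one then being gauna karma).
def fold_causative(tag: str, has_other_karma: bool = False) -> str:
    if tag == "jk1":
        return "mukhya karma" if has_other_karma else "karma"
    return {"pk1": "karta", "mk1": "karan"}.get(tag, tag)

print(fold_causative("pk1"))         # karta
print(fold_causative("mk1"))         # karan
print(fold_causative("jk1", True))   # mukhya karma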
• The relation point of time (rpt) relation is used in Anncorra to connect the starting point of a time or place with its ending point. In KGPBenTreebank, the starting and ending points of time are considered as two different karaks: the starting point is tagged as k5t and the ending point as k7t. Similarly, the starting and ending points of place are tagged as k5p and k7p, respectively. This difference is shown using the dependency structure of example 62 in Fig. 8a.

(62) 1990 sAla theke 2000 sAla paryanta bhAratera unnati druta chhila. (Bangla)
     [1990 year from 2000 year till India-gen development fast be-past]
     sana 1990 se 2000 taka bhArata kI pragati teja rahI. (Hindi)
     During the period from 1990 to 2000, India's development was rapid.

• In Anncorra, the intensifiers modifying adjectives are tagged as jjmod, whereas in KGPBenTreebank both the relation between two adjectives and the relation between an adjective and a noun are denoted by the same tag jnmod. See the dependency structure of example 63 in Fig. 8b.

(63) gA.Dha nIla ba;i (Bangla)
     [deep blue book]
     gaharA nIlI kitAba (Hindi)
     Deep blue book.

[Fig. 8: (a) Dependency tree of example 62. (b) Dependency tree of example 63.]

• In Anncorra, a karta is inserted in a recoverable condition, i.e., when it is not present on the surface but is recoverable from other parts of the sentence; a <NULL> node is created to represent the absent karta. In KGPBenTreebank, a karta is not inserted under any condition. See the dependency structure of example 64 in Fig. 9.

(64) rAma o shyAma hoTele khAbAra khela Ara sinemA dekhala. (Bangla)
     [Ram and Shyam hotel-loc food eat-past and movie see-past]
     rAma aura shyAma hoTela para khAnA khAyA aura sinemA dekhA. (Hindi)
     Ram and Shyam had food in the hotel and watched a movie.

[Fig. 9: Dependency tree of example 64.]

8 Conclusion

The present paper is on building a dependency annotation scheme for the modern Bangla language. The relation set in the proposed scheme is categorized into 3 levels: intrachunk relations, interchunk relations and interclause relations. Though most of the relations in the treebank have a syntactic orientation, semantic relations have also been considered in some cases where the need was felt.

The scheme has been created with the help of traditional Bangla grammar books, the existing schemes for Indian languages and the modern Bangla grammar books. Further, the scheme was corrected and enriched during the annotation process with the help of the annotators and some Bangla language experts; the annotated corpus was then corrected based on the modified tagset. Further study would help improve and enrich the scheme.

Three annotators have electronically annotated 4167 Bangla sentences to test the proposed scheme. The inter-annotator agreement value on 485 Bangla sentences is found to be 0.9288 in terms of Fleiss' Kappa, which indicates the consistency of the treebank. However, further analysis is required to find and correct mistakes in the annotation.

Appendix 1: The relation set of the Bangla treebank

Intrachunk relations
ppl    Postposition/Anusarga                   Related with noun/pron.
stc    Spatio-temp. con./Sthan-samay. samp.    Related with space-time noun
vx     Auxiliary verb/Sahayak kriya            Related with verb
pof    Part of/Kriya antargata bisheshya       Related with verb
redup  Reduplication/Shabda dbaita             Related with same rhym. word
frag   Fragment/Bhagnamsha                     Related with suffix

Karak relations
k1d    Doer subject/Kriya sampadak karta       Related with verb
k1e    Experiencer subject/Anubhab karta       Related with verb
k1p    Passive subject/Paroksha karta          Related with verb
k1s    Noun of proposition/Samanadhikaran      Related with verb
k1g    General subject/Sadharan karta          Related with verb
k2t    Transitive object/Sakarmak karma        Related with verb
k2m    Direct object/Mukhya karma              Related with verb
k2g    Indirect object/Gauna karma             Related with verb
k2u    Purposive object/Uddyeshya karma        Related with verb
k2s    Predicative object/Bidheya karma        Related with verb
k3     Instrumental/Karan                         Related with verb
k5p    Place rel. ablative/Sthanbachak apadan     Related with verb
k5s    State rel. ablative/Abasthabachak apadan   Related with verb
k5t    Time rel. ablative/Kalbachak apadan        Related with verb
k5d    Dist. rel. ablative/Duratbabachak apadan   Related with verb
k7p    Place rel. locative/Deshadhikaran          Related with verb
k7t    Time rel. locative/Kaladhikaran            Related with verb
k7d    Domain rel. locative/Bishayadhikaran       Related with verb
k7s    State rel. locative/Bhabadhikaran          Related with verb
rh     Reason/Hetu                                Related with verb
ru     Purpose/Uddeshya                           Related with verb
des    Destination/Gantabyasthal                  Related with verb
r6v    Possession/Dakhal                          Related with verb
compr  Comparison/Taratamya                       Related with any
sim    Similarity/Sadrishya                       Related with any

Modifier relations
r6       Genitive/Sambandha                              Related with noun
ras      Associative relation/Saharthak sambandha        Related with noun
rasneg   Non-associative relation/Namarthak sambandha    Related with noun
nnmod    Noun noun modifier/Sanyogmulak bisheshya        Related with noun
jnmod    Adj. noun mod./Bisheshyer bisheshan             Related with noun
dnmod    Dem. noun mod./Nirnay suchak sarbanam           Related with noun
pronmod  Pron. noun mod./Sarbanamjata bisheshan          Related with noun
pnmod    Participial noun mod./Kridanta bisheshan        Related with noun
anmod    App. noun mod./Tulyarupe sthapita bisheshan     Related with noun
adv      Adv. mod./Kriya bisheshan jatiya bisheshan      Related with verb
vmod     Verb-verb modifier/Kriya jatiya bisheshan       Related with verb
neg      Negation modifier/Namarthak abyay               Related with verb
acomp    Adjectival complement/Bidheya bisheshan         Related with verb

Few other interchunk relations
ccof   Conjunct/Samyojak abyay                 Related with conjunct
pcc    Preconjunct/Abasthatmak abyay           Related with SC.
rad    Address word/Sambodhan sabda            Related with verb
par    Particle/Bakyalankar abyay              Related with verb
qs     Question mark/Prashnabodhak chihna      Related with verb
end    End/Samapti                             Related with verb
sym    Symbol/Chihna                           Related with verb

Interclause relations
ref           Referent/Nirdesak                       Related with noun/pron.
clausal*      Clausal star/Bakyamsha samagra          Related with verb
clausalcomp   Clausal complement/Bakyamsha sampurak   Related with verb
comp          Complementizer/Sampurak                 Related with verb

rel.-related, pron.-pronoun, rhym.-rhyming word, mod.-modifier, adj.-adjectival, dem.-demonstrative, app.-appositional, adv.-adverbial, nom.-nominal, bish.-bisheshan, comp.-comparison, sim.-similarity, SC.-subordinating conjunction, temp.-temporal, con.-connection, samay.-samaygata, samp.-samparka

Appendix 2: ITRANS to glyphs in Bangla and Hindi scripts mapping

References

Begum, R., Husain, S., Dhwaj, A., Misra, D., Bai, L., & Sangal, R. (2008). Dependency annotation scheme for Indian languages. In Proceedings of the third international joint conference on natural language processing (IJCNLP). Hyderabad, India.

Bharati, A., Chaitanya, V., & Sangal, R. (1999). Natural language processing: A Paninian perspective. New Delhi: Prentice-Hall of India.

Bharati, A., Sangal, R., Chaitanya, V., Kulkarni, A., Sharma, D. M., & Ramakrishnamacharyulu, K. V. (2002). Anncorra: Building tree-banks in Indian languages. In Proceedings of the 3rd workshop on Asian language resources and international standardization (Vol. 12, pp. 1–8), COLING '02.

Bhatt, R., Narasimhan, B., Palmer, M., Rambow, O., Sharma, D. M., & Xia, F. (2009). A multi-representational and multi-layered treebank for Hindi/Urdu. In Proceedings of the third linguistic annotation workshop, ACL-IJCNLP '09 (pp. 186–189).
Association for Computational Linguistics, Stroudsburg, PA, USA. URL http://dl.acm.org/citation.cfm?id=1698381.1698417

Black, E., Eubank, S., Kashioka, H., Magerman, D., Garside, R., & Leech, G. (1996). Beyond skeleton parsing: Producing a comprehensive large-scale general-English treebank with full grammatical analysis. In Proceedings of the 17th international conference on computational linguistics (COLING-96) (pp. 107–112).

Black, E. W., Garside, R., & Leech, G. N. (Eds.) (1993). Statistically-driven computer grammars of English: The IBM/Lancaster approach. No. 8 in Language and Computers. Amsterdam.

Chakravarty, B. (2010). "Uchchatara bangla vyakaran", a complete text book on higher Bengali grammar. Akshay Malancha.

Charniak, E., Blaheta, D., Ge, N., Hall, K., Hale, J., & Johnson, M. (2000). BLLIP 1987–89 WSJ corpus release 1. Linguistic Data Consortium.

Chatterji, S., Sarkar, T. M., Sarkar, S., & Chakraborty, J. (2009).
Karak relations in Bengali. In Proceedings of the 31st All-India conference of linguists (AICL 2009) (pp. 33–36). Hyderabad, India.

Chatterji, S. K. (2003). Bhasha-prakash bangala vyakaran [A grammar of the Bangla language]. Calcutta: Roopa and Company.

Chopde, A. (2000). ITRANS "Indian language transliteration package", a package for printing text in Indian language scripts. http://www.aczone.com/itrans/.

Dandapat, S., Sarkar, S., & Basu, A. (2004). A hybrid model for part-of-speech tagging and its application to Bengali. In International conference on computational intelligence (pp. 169–172).

de Marneffe, M., & Manning, C. D. (2008). Stanford typed dependencies manual.

Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378–382.

Hajič, J., Böhmová, A., Hajičová, E., & Vidová-Hladká, B. (2000). The Prague dependency treebank: A three-level annotation scenario. In A. Abeillé (Ed.), Treebanks: Building and using parsed corpora (pp. 103–127). Amsterdam: Kluwer.

Hajič, J., Hajičová, E., & Rosen, A. (1996). Formal representation of language structures. TELRI Newsletter, 3, 12–19.

Hajič, J., Vidová-Hladká, B., & Pajas, P. (2001). The Prague dependency treebank: Annotation structure and support. In Proceedings of the IRCS workshop on linguistic databases (pp. 105–114). Philadelphia, USA: University of Pennsylvania.

Karlsson, F., Voutilainen, A., Heikkilä, J., & Anttila, A. (Eds.) (1995). Constraint grammar: A language-independent system for parsing unrestricted text. Berlin: Mouton de Gruyter.

Marcus, M. P., Marcinkiewicz, M. A., & Santorini, B. (1993). Building a large annotated corpus of English: The Penn treebank. Computational Linguistics, 19, 313–330.

McCord, M. C. (1990). Slot grammar: A system for simpler construction of practical natural language grammars. In R. Studer (Ed.), Natural language and logic: Proceedings of the international scientific symposium, Hamburg, FRG (pp. 118–145). Berlin: Springer.

Palmer, M., Gildea, D., & Kingsbury, P. (2005). The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31, 71–106. doi:10.1162/0891201053630264.

Santorini, B., & Marcinkiewicz, M. A. (1991). Bracketing guidelines for the Penn treebank project. Unpublished manuscript.

Scott, W. A. (1955). Reliability of content analysis: The case of nominal scale coding. Public Opinion Quarterly, 19, 321–325.

Sharma, D. M., Sangal, R., Bai, L., Begam, R., & Ramakrishnamacharyulu, K. (2007). Anncorra: Treebanks for Indian languages, annotation guidelines (manuscript).

Singh, A. K. (2011). Part-of-speech annotation with Sanchay. In Proceedings of the national seminar on POS annotation for Indian languages: Issues & perspectives. Mysore, India.

Xue, N., Xia, F., Chiou, F. D., & Palmer, M. (2005). The Penn Chinese treebank: Phrase structure annotation of a large corpus.
In Natural Language Engineering.
Design and Development of a Bangla Semantic Lexicon and Semantic Similarity Measure

Manjira Sinha, Tirthankar Dasgupta, Abhik Jana, Anupam Basu
Department of Computer Science and Engineering, Indian Institute of Technology

International Journal of Computer Applications (0975 – 8887), Volume 95, No. 5, June 2014

ABSTRACT
In this paper, we have proposed a hierarchically organized semantic lexicon in Bangla and also a graph-based edge-weighting approach to measure semantic similarity between two Bangla words. We have also developed a graphical user interface to represent the lexical organization. Our proposed lexical structure contains only relations based on semantic association. We have included the frequency of each word over five Bangla corpora in our lexical structure and have also associated further details with words, such as whether a word is mythological, whether it can be used as a verb and, in order to use the word as a verb, which word should be appended to it. This lexicon can be used in various applications like categorization, the semantic web, and natural language processing applications such as document clustering, word sense disambiguation, machine translation, information retrieval, text comprehension and question-answering systems.

General Terms: Semantic lexicon development
Keywords: Bangla SynNet, semantic similarity, category, concept, sub-concept, cluster

1. INTRODUCTION

The lexicon of a language is a collection of lexical entries consisting of information regarding words and expressions. According to Levelt [8], every lexical entry contains mainly two types of information, namely form and meaning, that help a user to recognize and understand words. Form refers to the orthography, phonology and morphology of the lexical item, and meaning refers to its syntactic and semantic information. A lexicon is the central part of any natural language processing application, such as machine translation, language comprehension, language generation and information retrieval. Depending on the storage structure and the content, the type of lexicon varies. For example, dictionaries, thesauri, FrameNet, WordNet and ConceptNet are different types of lexicons having different lexical representation schemes. One of the most popular and commonly used lexicons at present is WordNet, which organizes words in terms of their senses. Given the importance and wide acceptability of WordNet in computational linguistics, several attempts have been made to develop such lexical representation schemes for many other languages (see http://globalwordnet.org/). Attempts have also been made to develop WordNet-like lexical representation schemes in Indian languages; one widely known such work is the Hindi WordNet [30]. However, developing such a complex lexical representation scheme is not trivial. It not only requires extensive linguistic expertise, but manual encoding of the individual synsets also needs a huge amount of time and manual effort. As a result, a lot of attempts are currently under way to develop semi-supervised algorithms to compute semantic distance between words.
Bangla is an Indo-Aryan language. It is native to the region of eastern South Asia known as Bengal, which comprises present-day Bangladesh, the Indian state of West Bengal, and parts of the Indian states of Tripura and Assam. It is written using the Bengali script. It has been estimated that about 230 million people in the world speak Bangla, making it the sixth most spoken language in the world (see http://en.wikipedia.org/wiki/Bengali_language). Bangla is also the national language of Bangladesh. Despite being so popular, very few attempts have been made to build a semantically organized lexicon of substantial size in Bangla [16]. But, from the above discussion, it is evident that a semantic lexical representation is essential to the development of a number of NLP applications in Bangla.

In this paper we present the design and development of a Bangla lexicon that is based on the semantic similarity among Bangla words. The lexicon can be used in various applications such as those mentioned above. The design of this structure is based on the Samsad Samarthak Sabdokosh [11]. The lexicon is based on a hierarchical organization: at the top there is a root node, which is divided into different categories; the categories are divided into concepts; the concepts are divided into sub-concepts, which are further divided into clusters. The words are grouped into clusters along with their synonyms. The categories, concepts, sub-concepts and clusters are connected by weighted edges, where the weight denotes the semantic distance between the two nodes connected by an edge. Altogether the lexicon contains more than 50,000 unique Bangla words connected in terms of their semantic similarities.

Based on the hierarchical representation of our lexicon, we have developed a semantic similarity measure between Bangla words. The similarity measure was evaluated by a number of native Bangla speakers, and we have achieved a significantly high accuracy. The rest of the paper is organized as follows: Section 2 contains the background study, where we also point out some of the differences of our proposed structure with respect to WordNet. Section 3 explains the design and implementation of the lexicon along with some details of its basis. Section 4 describes the proposed approach to predicting semantic similarity between words. In Section 5 we discuss the evaluation of our proposed semantic similarity method. Finally, we conclude the paper in Section 6.

2. BACKGROUND STUDY

A plethora of work has been done on developing semantic lexicons in various languages like English, French, Dutch, German and Italian (see http://globalwordnet.org/). The efforts range from developing lexicons like dictionaries and thesauri to more advanced forms like WordNet, CYC and others. Synsets are the main building block of such lexical representations: a list of synonymous word forms constitutes a synset, and synsets are further connected in terms of different semantic relations like hyponym, hypernym, holonym, metonym, and meronym. These relations are nothing but semantic pointers. With respect to this, surprisingly few attempts have
been made to develop semantic lexicons in a language like Bangla, which is among the top ten most spoken languages in the world. The Bangla WordNet project [4] is the only such attempt that aims to build a large-scale semantic lexicon for Bangla words. However, at its present state the lexicon is reported to comprise around 36000 words, as compared to our proposed lexicon of 50,000 words. Further, that structure is based on Bangla-to-English bilingual dictionaries and is in strict alignment (only the synonym equivalents are used) with the Princeton WordNet for English. Our proposed lexical representation, SynNet, differs from WordNet in many aspects. Some of the important differences are: SynNet contains cross part-of-speech links, which are not present in WordNet; it contains semantic relations such as "actor" ([book]-[writer]) and "instrument" ([knife]-[cut]); its links are weighted to indicate the measure of semantic similarity between any pair of words; and moreover, SynNet acts as a thesaurus for Bangla rather than as a dictionary.

2.1 Works on measuring semantic similarity among words

A number of approaches for measuring conceptual similarity have been taken in the past. Tversky's feature-based similarity model [20] is among the early works in this field. Some works [13, 6, 7] have proposed the conceptual distance approach, which uses edge weights between adjacent nodes in a graph as an estimator of semantic similarity. Resnik [14, 15] has proposed the information-theoretic approach to measure semantic similarity between two words. Here, a class is made up of all words in a noun synset as well as the words in all directly or indirectly subordinate synsets, and the conceptual similarity between two classes is approximated by the information content of the first class in the noun hierarchy that subsumes both classes. Richardson et al. [16] have proposed an edge-weight based scheme for Hierarchical Conceptual Graphs (HCG) to measure semantic similarity between words. According to them, the weight of a link depends on three factors: the density of the HCG at that point, the depth in the HCG, and the strength of connotation between parent and child nodes. Efforts have been made to combine the information-content based approach and the graph-based approach to predicting semantic similarity (Jiang and Conrath, 1997) [5]. In addition, strategies of using multiple information sources to collect semantic information have also been adopted [9]. Wang and Hirst [21] have criticized the traditional notions of depth and density in a lexical taxonomy. They have proposed novel definitions of depth and density which have been found to give significant improvements in performance, and they have verified the results with human judgements. However, almost all of the attempts described above have been made for English, based on the representation of WordNet. Das and Bandyopadhyay [3] have proposed a SemanticNet in Bangla, where the relations are based on human pragmatics.

3. OVERVIEW OF THE PROPOSED LEXICON

We have taken the Samsad Samarthak Sabdokosh by Ashok Mukhopadhyay [11] as the basis for our proposed lexical representation in Bangla. In order to build up a semantic-relation-based lexical representation, we have constructed a hierarchical conceptual graph.
<s>based lexical representation, we have constructed a hierarchical conceptual graph. An illustration of such a hierarchical representation scheme is depicted in Figure 1. International Journal of Computer Applications (0975 – 8887) Volume 95– No.5, June 2014 Figure 1: Schema of the proposed lexiconWe have classified each level of hierarchy in terms of “domains”, “concepts”, “sub-concepts”, and “clusters”. Accordingly, we have 30 different domains. Each domain is consisting of different concepts. The concepts are classified into sub-concepts. Different groups of words that are semantically related to a single sub-concept are organized together. Relevant information such as Part-Of-Speech (POS) corresponding to every word and antonyms for adjectives are also mentioned. Concepts are further classified into clusters. Each cluster is consisting of semantically similar words which are further grouped according to their degree of semantic similarity thus, making the whole structure hierarchical in nature. We have used different markers to separate out each cluster as well as words within each cluster. Each word in our lexicon is composed of the following 11-tupules: 1 Word 2 Corpus frequency: computed from a Bangla lexicon of 35 million words. 3 Cluster_id: Id of the cluster of which the word is a member 4 Part-Of-Speech (POS) 5 Concept_id: Id of the concept in which the word belongs 6 Sub-concept_id: Id of the sub-concept(if exists) in which the word belongs 7 Category_id: Id of the category in which the word resides 8 Myth: a flag to indicate any mythical relation to the word 9 Antonym: a flag to indicate whether the word is antonym of the concept or not 10 Is_collective: a flag to indicate whether the word is a collective noun or not 11 G_word: a pointer to the general word denoting the collection in which the present word belongs 12 Verb: a flag to indicate whether the word can be also used as a verb or not. 13 To_verb: contains a word which can be appended to the present word to make it possible to be used as a verb. The no of word can be more than one also. 14 Primary link 15 Secondary link In order to compute frequencies of each lexical item, we have prepared a Bangla corpuses composed of complete novel and story collection of Rabindranath Tagore, Bankimchandra Chattaopadhayay4, collection of Bangla blogs over the internet, Bangla corpus by CIIL Mysore5 and Anandabazar news corpus6. All together there are 35 million words from which we have prepared a list of around 4 lakh distinct words in Bangla with their corpus frequencies. Given a word, its frequency over the five mentioned corpuses, its associations with different categories or sub-categories are collected at a single place so that a user can navigate through the storages with low cognitive load. We have also rated the various types of connections among different levels of the graph and developed a mechanism for predicting semantic similarity measures between words in the proposed lexicon. It supports queries like DETAILS(X) (here X can be any type of node of the</s> |
3.1 The Primary and Secondary Links

Two types of cross links exist in our semantic lexicon: primary links and secondary links, which are specified after words under clusters. A primary link refers to concepts or sub-concepts which are semantically very close to the word after which the link is specified; for example the word গ্রহজগৎ/planetary system, which is under the মহাবিশ্ব/universe concept, has a primary link to the concept সূর্য/sun. A secondary link refers to concepts or sub-concepts which are somehow, or in some generalized sense, semantically related to the word after which the link is specified; for example the word জ্যোতির্বিদ্যা/astrology, which is under the concept গ্রহ/planet, has a secondary link to the concept নক্ষত্র/star. A primary link is represented by a special tag such as <\primary>; after <\primary>, a concept number or a sub-concept number is given, to which the word has the primary link. A secondary link is represented by the tag <\secondary>; within the tags, a concept number or sub-concept number is given, to which the word has the secondary link. A particular word can have more than one primary or secondary link. An illustration of the primary and secondary links in our lexicon is shown in Figure 2.

In Figure 2, the category id of মহাবিশ্ব-প্রকৃতি-পৃথিবী-গাছপালা/universe-nature-earth-flora is 1; মহাবিশ্ব/universe has sub-category id 1.1, meaning it is the 1st sub-category of category 1; and নিখিলভুবন/universe has cluster id 1.1.1, as it belongs to the synonym cluster of 1.1. The member relations of words with their clusters are shown as dashed lines, while the round dotted line and the compound line indicate a primary link and a secondary link, respectively.

[Figure 2: Hierarchical representation of the Bangla semantic lexicon.]
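The dotted identifiers ("1", "1.1", "1.1.1") encode the path from category down to cluster, so the ancestors of any node are recoverable from its id alone. A small sketch, with a function name of our own choosing:

# Recover the ancestor ids of a node from its dotted identifier.
def ancestors(node_id: str):
    """'1.1.1' -> ['1', '1.1'] (category id, then sub-category id)."""
    parts = node_id.split(".")
    return [".".join(parts[:i]) for i in range(1, len(parts))]

print(ancestors("1.1.1"))   # ['1', '1.1']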
3.2 Development of the Proposed Lexicon

In order to build up a semantic-relation-based lexical representation of Bangla, we have constructed a hierarchical conceptual graph based on the above-mentioned thesaurus. We have also individually processed and stored the distinct general words in the book along with their respective details. Our storage and organization of the database facilitate computational processing of the information and efficient searching to retrieve the details associated with any word. Therefore, it will be a useful resource and tool for other psycholinguistic and NLP studies in Bangla. Given a word, its associations with different categories or sub-categories are collected in a single place so that a user can navigate through the storage with ease. We have also rated the various types of connections among the different levels of the graph and developed a mechanism for predicting semantic similarity measures between words in the proposed lexicon. The details of the organizational methodology are described below.

As discussed earlier, the proposed lexicon contains words from 90 different domains; for example, মহাবিশ্ব/universe, প্রকৃতি/nature, পৃথিবী/earth, গাছপালা/flora, ইন্দ্রিয়-অনুভূতি/sense-perception, কাল/time, ঋতু/season, and বয়স/age are different domains. Each domain is a collection of concepts; for example, সূর্য/sun is a concept under the domain মহাবিশ্ব. Moreover, সূর্য also belongs to the domains প্রকৃতি/nature and পৃথিবী/earth, connected using the primary links. To date, there are altogether 757 unique concepts under the head of the 90 domains. These concepts are divided into sub-concepts in some cases. The sub-concepts do not have any specific name; we have given each sub-concept a unique id. The words (mainly nouns, pronouns, adjectives, adjective-nouns and verbal adjectives) have been distributed into separate clusters attached to the concepts or sub-concepts, and they form the leaves of the hierarchy. There is a common root node as antecedent to all the categories.

Corresponding to each concept, there are two types of clusters: one contains the exact synonyms, and clusters of the other type contain related words or attributes. The words belonging to the same cluster are synonymous. For example, consider the concept "সাহসিকতা" (courage). The lexical items under this concept, like সাহস, নির্ভীকতা, নির্ভয়তা, are all synonyms of each other; therefore they form a cluster. On the other hand, lexical items like দুঃসাহস, ইচ্ছাশক্তি, তেজ, শৌর্য, বীর্য, although semantically related, are not exactly synonymous with সাহসিকতা; therefore they form a separate cluster in the lexicon. Moreover, every concept contains a set of antonyms associated with it. The antonyms are situated within the same clusters where the synonyms are present; however, they are separated from the synonyms through a specific antonym marker. For example, দেশদ্রোহী/traitor is under the concept স্বদেশ/native land in the adjective section with an [/antonym] tag. Every category, concept and cluster has a distinct identification number, which is stored along with the lexemes for further processing. An illustration of the hierarchical representation of domains, concepts, sub-concepts and clusters is shown in Figure 2 above.

Each of the concepts has its corresponding synonyms and similar or related words. Different groups of words that are associated with a concept are organized together. Relevant information such as the part of speech (POS) corresponding to every group of words is specified by tags like বি./Noun and বিণ./Adjective. If a word is mythological, then a tag like [পৌরা.] is specified before that word. Words having a hyphen '-' at the end can be used as verbs, e.g. ধরা-/catch, খেলা-/play. There are some words which are nouns or adjectives but which can be used as verbs by appending some word to them. In our corpus, the words to be appended are specified after the main word within parentheses, like শিকার (ক). শিকার/hunt is basically a noun, but we can use it as a verb by appending করা/do to it, which is indicated by (ক). There are several tags like this, e.g. (দে) indicates দেওয়া and (হ) indicates হওয়া. In the case of a collective noun, the collection of words is specified within square brackets, separated by semicolons (;), after the word, e.g. সপ্তপাতাল [অতল; বিতল; সুতল; তলাতল; মহাতল; রসাতল; পাতাল], where সপ্তপাতাল is a collective noun. An important property of this lexicon is that even if two words are orthographically and phonologically the same but semantically different, no cross-reference occurs between them. For example, the word কলা means art, and it is also a type of fruit (banana); these two occurrences of the word কলা do not have any cross links with each other, as the two senses are not semantically close.
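The verbalizer markers described above can be expanded mechanically. A minimal sketch, assuming only the marker-to-verb pairs given in the text ((ক) -> করা, (দে) -> দেওয়া, (হ) -> হওয়া); the dictionary and function names are our own:

# Expand a verbalizer marker into the full verb phrase.
VERBALIZERS = {"ক": "করা", "দে": "দেওয়া", "হ": "হওয়া"}

def verbalize(word: str, marker: str) -> str:
    """শিকার + (ক) -> 'শিকার করা', i.e. the noun used as a verb."""
    return f"{word} {VERBALIZERS[marker]}"

print(verbalize("শিকার", "ক"))   # শিকার করা ('to hunt')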
4. MEASURING SEMANTIC SIMILARITY BETWEEN BANGLA WORDS

Many approaches have been taken to measure the semantic similarity between categories or words (described in the Background Study section), such as information-theoretic approaches and graph-based approaches. Here, we propose a simple graph-based semantic similarity measure on our lexicon, which we have also verified with user feedback.

In our proposed lexical representation, the nodes at the top represent generalized concepts, and as one goes down the hierarchy the nodes represent more specialized concepts. Therefore, the distance between a category and one of its concepts is greater than that between a concept and one of its clusters or sub-concepts (if they exist). To capture this in our similarity measure we have assigned edge weights representing the relative distances. There are 8 types of link in this organization, listed in Table 1; each link weight is expressed in terms of a constant whose value can be adjusted accordingly.

Table 1: Edge weight distribution

1. member relation: between a cluster and a word under it
2. is-a relation: between a sub-concept and a cluster under it
3. is-a relation: between a concept and a sub-concept under it
4. is-a relation: between a concept and a cluster under it
5. is-a relation: between a category and a concept under it
6. is-a relation: between the root and a category under it
7. primary link: between a word and a concept (or a sub-concept)
8. secondary link: between a word and a concept (or a sub-concept)

We have assumed that all the nodes at a particular level are equal in weight. The semantic similarity between any pair of words is measured from the shortest path distance between them:

    similarity(w1, w2) = S / d(w1, w2)        (1)

In Equation (1), S is a constant signifying the scale of measurement and d(w1, w2) is the shortest path distance between the two words in the lexicon. The constants have been chosen so that a pair of synonyms has a score of 10 out of 10. The distribution of edge weights in the lexicon is shown in Figure 3.

[Figure 3: Edge-weight distribution in the lexicon.]
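The measure combines a weighted shortest-path search with the inverse scaling of Equation (1). A minimal sketch under stated assumptions follows: the tiny graph, its concrete edge weights and the exact scaling formula are illustrative placeholders of our own, not the values used in the actual lexicon.

# Weighted shortest path (Dijkstra) over the hierarchy, inverted into a
# similarity score roughly in the spirit of Equation (1).
import heapq

def shortest_path(graph, src, dst):
    """graph: {node: [(neighbour, weight), ...]} with symmetric edges."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def similarity(graph, w1, w2, scale=10.0):
    # Shorter paths -> higher scores; scale chosen so close pairs
    # approach the top of the 10-point scale (placeholder formula).
    return scale / (1.0 + shortest_path(graph, w1, w2))

toy = {
    "cluster": [("sAhasa", 0.5), ("nirbhIkatA", 0.5), ("concept", 1.0)],
    "sAhasa": [("cluster", 0.5)],
    "nirbhIkatA": [("cluster", 0.5)],
    "concept": [("cluster", 1.0)],
}
print(similarity(toy, "sAhasa", "nirbhIkatA"))   # two words of one cluster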
Therefore, from Equation (1), the semantic similarity values between the different types of word pairs are as depicted in Table 2, with one score (on a scale of 10) per case. The cases considered are:

Table 2: Similarity cases

- both words are in the same cluster
- both words are in the same sub-concept, but in different clusters
- both words are in the same concept, but in different clusters
- both words are in the same category, but in different concepts
- the words are from different categories
- the words are from different concepts, but connected through a primary link to a sub-concept
- the words are from different concepts, but connected through a primary link to a concept
- the words are from different concepts, but connected by a secondary link to a sub-concept
- the words are from different concepts, but connected by a secondary link to a concept

5. EVALUATION

In order to evaluate our proposed semantic similarity measure, we selected 400 different Bangla word pairs from our developed semantic lexicon. The selection of these word pairs was done in a pseudo-random manner. 300 word pairs were selected in a controlled manner, chosen from six different categories of relations in the following way:

Category 1: 50 pairs had both words from the same cluster, i.e. synonyms.
Category 2: 50 pairs had words from different clusters of the same concept.
Category 3: 50 pairs had words from different concepts of the same category.
Category 4: 50 pairs had words belonging to different categories.
Category 5: 50 pairs had words connected by primary links to a concept.
Category 6: 50 pairs had words connected by secondary links to a concept.

Another set of 100 word pairs was randomly chosen from the lexicon; these word pairs may or may not have any semantic relationship among them. We also chose another set of 200 word pairs which do not share any semantic relationship. Altogether, 600 Bangla word pairs were selected for our evaluation purpose.

60 different native speakers of Bangla participated in the experiment, aged between 23 and 36 years. All of them hold a graduate degree in their respective fields, and 10 have a post-graduate degree. Each participant was provided the same set of 600 Bangla word pairs. The participants were asked to assign a score from 1 to 10 to each word pair based on its degree of relatedness: 1 for the lowest or no connectivity and 10 for the highest connectivity, i.e. synonyms.
5.1 Result and Discussion

Perceiving semantic similarity or relatedness between a pair of words, or the concepts they denote, depends on the cognitive skill, the domain or language knowledge, and the background of the user. Corresponding to each of the six types of word pairs taken for the user study, we have calculated both the median and the mean of the user ratings. The mean has been used because of its popularity and common use, but as the mean is very sensitive to outliers or extreme values, the median has also been taken into account. Table 3 below shows the outcomes of the user validation, and Figure 4 demonstrates the results graphically: it can easily be seen that the user ratings and our proposed measure are very close to each other.

Table 3: User score versus predicted score

Category                    1     2     3     4     5     6
Median user rating          8.5   6     3.59  1     7     5.5
Mean user rating            8.6   5.89  2.38  1.25  6.34  4.94
Predicted similarity score  10    4.76  3.03  2.17  5.48  4.16

[Figure 4: Performance analysis of user rating versus predicted measure.]

One interesting point to note here is that the overall mean and median of the user ratings for category 1 are less than 10. This means that synonyms are not always perceived as exactly similar to each other. The Spearman's rank correlation (see http://en.wikipedia.org/wiki/Spearman's_rank_correlation_coefficient) of the predicted semantic similarity measure with the median values of the user scores corresponding to each of the 50 word pairs is 1.

To depict the subjectivity of users' perception, we have plotted the median values against our proposed scores (refer to Section 5). As can be seen from Figure 5, there are a few outliers in the dataset which have median values far from the group mean and median (type 1). Another type (type 2) of word pair is also of interest: these pairs show a significant difference (greater than 1) between the mean and median values, which implies that the user ratings contain some extreme values. The pairs belonging to each type are given below in Table 4.

[Figure 5: Comparisons of ratings of individual pairs with proposed scores.]

Table 4: List of Type 1 and Type 2 word pairs

Category  Word-pair                                          Type
1         দুর্গা/Durga – ভগবতী/Bhagovati                        2
2         রুচি/interests – রমণীয়/beautiful
3         বন্যা/flood – পর্বত/mountain                          2
5         গ্রহজগৎ/planetary system – সৌরলোক/solar system
5         কৃষিজমি/farm land – ফসল/crop                          2
2         নগ্নতা/naked – বিবস্ত্র/undressed                       1
2         আলাদা/different – বিভেদ/discriminate
5         গমন/go, travel – যাওয়া/departure
5         শিলাবৃষ্টি/hail – বরফপড়া/snowfall                      1
6         ভরাকোটাল/high tide – জলপ্লাবন/flood
3         সাফল্য/success – খ্যাতি/fame                           1,2
6         হিমশৈল/iceberg – নুড়ি/pebbles                         1,2
6         ক্রমশ/continued – মন্থরতা/slowness                     1,2

As can be seen from the table above, word pairs like (দুর্গা/Durga – ভগবতী/Bhagovati) demand a certain level of knowledge about mythology to be perceived as synonyms; therefore, the user scores corresponding to this kind of word pair vary from person to person. Again, the similarity of the word pairs (গ্রহজগৎ/planetary system – সৌরলোক/solar system) and (কৃষিজমি/farm land – ফসল/crop) depends on how a user connects the two concepts in her cognition. Type 1 word pairs such as (নগ্নতা/naked – বিবস্ত্র/undressed), (শিলাবৃষ্টি/hail – বরফপড়া/snowfall) and (সাফল্য/success – খ্যাতি/fame) have been marked as synonyms or highly similar by the users. These phenomena demonstrate the confusion in distinguishing synonyms from very closely related concepts or words, especially those which are used interchangeably in everyday situations. Three pairs belong to both types, signifying that they have been perceived as very close by most of the users while at the same time receiving extreme values from the rest.
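The aggregation behind Table 3 and the rank correlation are straightforward to compute. A minimal sketch, using scipy's spearmanr for the correlation; the small rating matrix is made-up illustrative data, not the study's actual responses.

# Per-category median/mean of user ratings and the Spearman correlation
# between the median ratings and the predicted scores.
import statistics
from scipy.stats import spearmanr

ratings = {              # category -> ratings pooled over all users
    1: [9, 8, 10, 8],
    2: [6, 5, 7, 6],
    3: [3, 4, 2, 3],
}
predicted = {1: 10.0, 2: 4.76, 3: 3.03}

medians = {c: statistics.median(v) for c, v in ratings.items()}
means = {c: statistics.fmean(v) for c, v in ratings.items()}

rho, _ = spearmanr([medians[c] for c in sorted(ratings)],
                   [predicted[c] for c in sorted(ratings)])
print(medians, means, round(rho, 3))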
6. CONCLUSION AND FUTURE ASPECTS

We have proposed here a hierarchically organized semantic lexicon in Bangla and a graph-based edge-weighting approach to measure semantic similarity between two Bangla words. The similarity measures have been verified using user studies. We have also developed a graphical user interface to represent the lexical organization. Our proposed lexical structure contains only relations based on semantic association. We have included the frequency of each word over five Bangla corpora in our lexical structure and have also associated further details with words, such as whether a word is mythological, whether it can be used as a verb and, in order to use the word as a verb, which word should be appended to it. As discussed earlier, this lexicon can be used in various applications like categorization, the semantic web, and natural language processing applications such as document clustering, word sense disambiguation, machine translation, information retrieval, text comprehension and question-answering systems. We can also use it as a tool to improve the readability of a text: for example, we can substitute words that are not understandable by a reader with easier words from the same cluster, so that the sense of the sentence remains the same. We can also use it as a tool to increase one's vocabulary.

In future, we will try to associate more details with words, such as their pronunciations, their distribution in spoken corpora, and their word-frequency history over time. We will tag specific relations between concepts or sub-concepts and clusters, that is, how a cluster of words is related to a concept or sub-concept; we are considering annotating these manually. We will try to incorporate the information content of the words or other types of nodes into the similarity measure and subsequently verify them against user ratings as well as other automatic applications like text simplification, WSD etc. We still have to consider the relative difficulty of each word based on its corpus frequency or probability of occurrence. Also, all the clusters belonging to a common concept and all concepts descending from a common category have been assumed to be equal; from the results of the user study it seems that there should be relative gradations of the degree of similarity in these cases. We need to include these considerations in our measurement framework in order to achieve a better correlation with users' cognitive perception. As evident from the users' feedback, the perception of semantic similarity between a pair of words varies largely according to user background, and there should be an efficient mechanism to take the user's background into account. The present lexical structure contains static edges representing an ideal situation; a lexicon having dynamic connectivity can be helpful in understanding the effect of learning on the organization of the mental lexicon.

7. REFERENCES

[1] Aitchison, J. (2012). Words in the mind: An introduction to the mental lexicon. Wiley-Blackwell.

[2] Boyd-Graber, J., Fellbaum, C., Osherson, D., and Schapire, R. (2006). Adding dense, weighted connections to WordNet. In Proceedings of the Third International WordNet Conference, pages 29–36.
<s>Proc. of the 11th Intl. Conference on Natural Language Processing, pages 305–314, Goa, India, December 2014. ©2014 NLP Association of India (NLPAI)
Making Verb Frames for Bangla Vector Verbs
Sanjukta Ghosh, Department of Linguistics, Banaras Hindu University, Varanasi, INDIA, san_subh@yahoo.com
Abstract
This paper is an initial attempt to make verb frames for some Bangla verbs. For this purpose, I have selected 15 verbs which are used as vectors in the compound verb constructions of Bangla. The frames are made to show their number of arguments as well as the case markings on those arguments when these verbs are used alone and when they form part of compound verb constructions. This work can be extended further to make similar verb frames for all other verbs and can be used as a rich lexical resource.
Keywords: verb frames, compound verbs, Bangla, arguments and case markings, lexical resource.
1 Introduction
Compound verbs have always taken a pivotal role in the linguistic research of Indian languages. They are composed of two verbs which together make a complex predicate. The typical property of compound verbs in Indian languages is that the second verb or V2 plays a very important role in the meaning composition of the complex predicate and occasionally also retains its own argument while forming a compound verb. For these interesting properties, compound verbs have been studied extensively in almost all major Indian languages (Hook (1974), Kachru and Pandharipande (1980), Butt (1993, 1995) and Kachru (1993) on Hindi, Bashir (1993) on Kalasha, Pandharipande (1993) on Marathi, Fedson (1993) on Tamil, Hook (1996) on Gujarati, Kaul (1985) on Kashmiri, Rajesh Kumar and Nilu (2012) on Magahi, Mohanty (1992) on Oriya and Paul (2004, 2005) on Bangla). The first verb appears in its perfective form in Bangla compound verbs and the second takes the inflection, as in the other languages. The vectors of compound verbs are responsible for providing certain kinds of senses cross-linguistically, as shown by Abbi and Gopalkrishnan (1992) with data taken from all four major language families of India. They have categorized these senses into three types, aspectual, adverbial and attitudinal, with some sub-types under these. Under the aspectual type, a sense of perfectivity or completeness of the event (telicity) is found cross-linguistically. The adverbial type is further divided by them into three sub-types: manner, benefactive and others, all showing in some way how the action of the V1 is performed. Under the attitudinal type comes the attitude of the speaker or the narrator towards the action mentioned by the V1, such as anger, surprise, contempt, disgust etc. These three senses are observed in Bangla compound verbs also. While aspectual and adverbial are the most common senses, some cases are found (vectors bOS and rakh) with the attitudinal sense of undesirability and of an intention to make the hearer attentive towards the speaker. The present paper takes in total 15 major vector verbs of Bangla listed in the work of Paul and provides syntactic frames for them when used alone as well as</s>
<s>when used as a V2 in a compound verb construction. The goal of the paper is to see whether in a compound verb construction the V2 retains its argument structure and case marking properties or not. The paper, as far as my knowledge goes, makes the first attempt to build verb frames for Bangla. The syntactic and semantic information associated with the verb frames can be very useful for classifying and analyzing the verbs as well as for many NLP applications such as information retrieval, text processing, building parsers, machine translation etc.
2 Syntactic Frames of Bangla Vector Verbs
Attempts at classifying verbs are not a new phenomenon for a resource-rich language like English. Levin's verb classification (Levin 1993), based on syntactic and semantic properties, is a gigantic classic work. VerbNet is a broad-coverage, freely available online verb lexicon of English based on Levin's classification which provides information about thematic roles and syntactic and semantic representations. It is linked to other lexical resources like FrameNet, PropBank and WordNet. However, Indian languages still lack this kind of broad-coverage verb resource. Apart from an initial attempt at making verb frames for Hindi (Rafiya et al., undated online version), we do not find any other work in this direction. For the present work, to build a frame for a verb, I take the syntactic categories in a sentence where the verb appears. Then the NPs of the sentence are marked with the thematic roles they bear and the case marker or postposition they take. Apart from the traditional thematic roles found in standard GB theory, like agent, experiencer, patient, instrument, benefactive, source, location and goal, I have used some other roles, like associative for the NP accompanying an agent, and have distinguished between a causer, a cause and an intermediate agent in the case of a verb which provides the semantics of causation. The noun phrases are classified into location, time and matter, and they bear locative case markers. Therefore, these different types of NPs are mentioned in the frames. Case markers are very important for Indian languages. For one particular theta role, there may be more than one case marker. After the frame schema, an example Bangla sentence is given for that particular frame with a gloss and a translation in English. In the following subsections the verb frames are developed and described for each of the vectors found in a compound verb construction.
1. /aSa/ 'come'
Sense 1: Deixis
The deictic verb aSa may focus on the source of the movement, the path or the goal. If the source is in focus, the frame has a theke-marked source location NP; if the goal is in focus, the argument is locative -e marked; and when the path is in focus, it is a -die or -hoe marked path NP.
NP1 (Agent) NP2-theke (source) V
1. ami baRi theke aSchi.
I house-from come-pr-prog-1p
'I am coming from the house.'
NP1 (Agent) NP2-e (goal) V
2. pOrpOr kotogulo durbOl Sorkar khOmotay aSe.
Consecutively some weak government power-loc come-pr-3p
'Some</s>
<s>weak governments came to power consecutively.'
NP1 (Agent) NP2-hoe/die (Path) V
3. amra kalna hoe elam.
We Kalna via come-pt-1p
'We came via Kalna.'
4. amra men lain die elam.
We main line by come-pt-1p
'We came by main line.'
The verb may also take an associative argument marked with -r SoMge/Sathe as in (5).
NP1 (agent) NP2-r SOMge (associative) V
5. bacchaTa kar SoMge eSeche?
Child-cl who-gen with come-pr-pft-3p
'With whom has the child come?'
The V1s which are compatible with deictic senses are ghora 'to roam', phera 'to return' and bERano 'to travel, to stroll'. The frames of the verbs ghora and bERano require a locative argument. This argument, when combined with aSa in a compound verb, is merged with the source argument of aSa. As a result, there is only one source-marked argument in the compound verb frames of ghure aSa and beRie aSa, which may be dropped occasionally. E.g.
NP1 (agent) NP2-theke (source) V1 V2
6. amra ladakh theke beRie elam.
We Ladakh from travel come-pt-1p
'We came back travelling from Ladakh.'
When the path is focused in the deictic sense, the argument is case marked with the postposition die, as in the next sentence.
NP1 (agent) NP2-die (path) V1 V2
7. je pOth die cole eli, Se pOth Ekhon bhule geli. (Tagore song)
which way by walk come-pt-2p-NH, that way now forget go-pt-2p-NH
'Now you forgot the way by which you came past.'
When the goal is in focus, however, the argument of the same compound verb becomes a locative -e/te marked place noun.
NP1 (agent) NP2-e (goal) V1 V2
8. hElheD istinDia kompanir cakri nie bharote cole aSen.
Halhed East India Company-gen job with India-loc come back-pr-3p-hon
'Halhed came back to India with a job in the East India Company.'
When the source is in focus, the argument will be a theke-marked source.
NP1 (agent) NP2 (temporal) V1 V2
9. kintu uni jodi ajkaler moddhei phire aSen..
but he-hon if by today or tomorrow return come
'What if he comes back by today or tomorrow.'
aSa may also be combined with a verb with one object argument (e.g. phEla 'to drop'), and in that case that object argument is realized in the frame of the compound verb as the following. However, the deictic sense of the vector is still retained in such a construction: the place of speaking and the place of the first verb cannot be the same.
NP1 (agent) NP2-0/ke/re (theme) V1 V2
10. tui phele eSechiS kare?
You-NH drop come-pr-pft-2p-NH whom (poetic)
'Whom have you left behind?'
NP1 (agent) NP2-ke (patient) V1 V2
11. make bole eSechiS? (alternatively bOla may also take a locative argument like baRite)
Mother-obj. tell come-pr-pft-2p-NH
'Have you informed mother?'
Sense 2: Duration
When aSa as a V2 is used in the sense of duration of the event of V1, it may have a temporal NP with postpositions like theke, dhore (kOtodin dhore).
NP1 (agent) NP2 (temporal) V1 V2
12. ami kObe theke bole aSchi.
I when from say come-pr-prog-1p
'I have been saying it for a long time.'
NP1 (agent) NP2 (matter/višay) V1 V2</s>
<s>13. Se tar pitar onukOroNe o nirdeSe ei karjo kore aSche.
(s)he his/her father-gen following-loc and instruction-loc this work do come-pr-prog-3p-NH
'She has been doing this work following her father's instruction.'
Sense 3: Gradualness of the event
When aSa is used in the sense of gradualness of the event of V1, it may optionally take a manner adverb phrase like dhire dhire, aste aste 'slowly'.
(AdvP) NP1 (undergoer) V1 V2
14. (dhire dhire) cokh (ghume) juRe aSche.
Slowly eyes in sleep close come-3p-pr-prog
'The eyes have been closing slowly.'
2. rakha 'to keep': This verb, when used alone, takes a locative place argument.
NP1 (agent) NP2 (theme) NP3-e/-er + a locative postposition (locative) V
15. boiTa kothay rekhechiS?
book-cl where keep-pr-pft-2p-NH
'Where have you kept the book?'
Sense 1: Aspectual
When used in the aspectual sense as a vector, it combines with verbs which take one object argument, and that argument is realized in the frame of the compound verb; the aspectual difference between the two senses of rakha as a vector is not reflected in the verb frame. In both cases the V1 is a verb with one object argument, but the difference is in the type of the verb: with the first sense a process verb is found as V1, and with the second sense a perceptual verb resulting in some cognitive effect. E.g.
NP1 (agent) NP2-0/ke (patient) V1 V2
16. phulTa tule rakhiS.
flowers pick keep-2p-fut-imp-NH
'Pick up the flowers (with a telic reading).'
17. benarOs tar oitijjho dhore rekheche.
Benaras its tradition hold keep-pr-pft-3p
'Varanasi has kept up its tradition.'
Sense 2: To make the hearer attentive / attitudinal sense of the speaker
It is often used in the imperative mood and generally comes with the adverbial phrase bhalo kore or mon die, which in this context means carefully/with attention. E.g.
NP1 (agent) NP2-0/ke (patient) AdvP V1 (perception verb) V2
18. kOthaTa bhalo kore Sune rakho.
Word-cl well hear keep-pr-imp-2p-NH
'Listen to the words carefully.'
3. ana 'to bring': The basic frame has a subject and an object argument and a source from where the object is brought. The verb ana semantically involves a movement of things from one place to another.
NP1 (agent) NP2-theke (source) NP3-0/ke (theme) V
19. tumi dilli theke ki enecho?
you Delhi-from what bring-pr-pft-2p-MH
'What have you brought?'
When it is combined with another verb with an object argument, there is sharing of arguments and only one object is realized in the frame of the compound verb.
NP1 (agent) NP2-0/ke (theme) V1 V2
20. kake dhore enechi, dEkho.
whom catch bring-pr-pft-1p see-imp-2p
'See, whom have I brought?'
4. dea 'to give': It loses its core sense when used as a vector. The frame of dea alone is the following.
NP1 (agent) NP2-ke (recipient) NP3 (theme) V
21. ke kake ki dilo?
Who whom what give-pt-3p-NH
'Who gave what to whom?'
When used as a vector with any other process verb in the complete aspectual sense, it does not retain any of its arguments. The resulting sentence structure has the arguments of V1. The semantics of this vector is that the action</s>
<s>of V1 will be beneficial for somebody other than the actor.
NP1 (agent) NP2-0/ke (patient) V1 V2
22. tumi ki kajTa kore debe?
you QP work-cl do give-fut-2p-MH
'Will you do the work?'
5. nea 'to take'
The basic frame of the verb is the following, with a theke-marked source argument.
NP1 (agent) NP2-theke (source) V
23. eTa kotha theke nile?
This-cl where from take-pt-3p
'Where did you take it from?'
However, the source argument is lost when it is used in a compound verb frame as a V2, and the frame follows the first verb completely. The semantics is that the action of V1 is done for the benefit of the actor. E.g.
24. taRataRi khee ne, iskuler deri hoe jacche.
quickly eat take-pr-imp-2p-NH, school-gen. late become go-pr-prog
'Take the food quickly, (you) are getting late for school.'
6. tola 'to pick up'
The basic frame of the verb consists of an agent of picking up, a source of the pick-up and the theme/patient to be picked up.
NP1 (agent) NP2-0/ke (theme) NP3-theke (source) V
25. ami bacchader skul theke tulbo.
I children school from pick up-fut-1p
'I will pick up the children from the school.'
The source argument is not found in the frame of the compound verb with an accomplishment V1.
NP1 (agent) NP2-0/ke (patient) (AdvP) V1 V2
26. Condrobabu naiDuke notun rajdhani SohorTa dhire dhire goRe tulte hObe.
Candrababu Naidu-Dat new capital city-cl slowly build pick be-fut-modal
'Candrababu Naidu has to build up the new capital city slowly.'
7. /oTha/ 'to rise'
The primary frame of the verb has an agent argument with a locative/temporal argument if it is used in the sense of physical rising or ascending or getting up or some change in position.
NP1 (agent) NP2-0/e (location/temporal) V
27. bacchara chade uTheche.
kids roof-loc ascend-pr-pft-3p
'The kids have ascended onto the terrace.'
28. amra bhorbEla uThechi.
We dawn get up-pr-pft-1p
'We have got up early in the morning.'
29. Surjo purbodike oThe.
The sun east-loc rise-generic-pr-3p
'The sun rises in the east.'
This verb, when used as a vector with gORa 'to build', makes an unaccusative complex predicate where the subject of the construction is actually the complement of the verb.
NP1 (patient) NP2 (location/temporal) V1 V2
30. Sob purono Sobbhotai nodir dhare goRe uThechilo.
All old civilizations river-gen side-loc build rise-pt-3p
'All the old civilizations were built on the river banks.'
In the second sense of the vector, it contributes to the modality of the first verb and is mostly used with interrogation or negation. The V2 oTha in this sense always appears with the suffix -te and the auxiliary para denoting ability.
NP1 (agent) NP2 (patient) V1 V2-te para
31. ami Ekhono lekhaTa likhe uThte pari ni.
I so far article-cl write rise can not
'So far I have not been able to manage to write the article.'
NP1 (agent) ki NP2-0 (patient) V1 V2 para?
32. tumi ki kajTa kore uThte perecho?
you QP work-cl do rise can-pr-pft
'Have you been able to manage to do the work?'
In</s>
<s>the third use of the vector oTha, it is very commonly used with the manner adverb hOThat 'suddenly' and denotes the suddenness or unexpectedness of the first event.
NP1 (agent) AdvP V1 V2
33. baccaTa hOThat keMde uThlo.
child-cl suddenly cry rise-pt-3p
'Suddenly the child cried.'
8. pORa 'to fall'
The verb pORa, as mentioned before, denotes a change of place of its subject. The subject undergoes an action of falling from a certain source, often marked with theke. The location of falling may also occasionally be marked in the sentence with locative -e. However, none of these need be present in a sentence. The subject gets the theme theta role and is actually raised to the position of the subject from the complement position of the verb. Therefore, pORa is an unaccusative predicate.
NP1-theke (source) NP2-e (location) NP3 (theme) V
34. kal rate gach theke bagane Onek am poReche.
last night tree from garden-loc many mangoes fall-pr-pft-3p
'Many mangoes have fallen in the garden last night.'
When it is used with a stative V1 in a compound verb frame, it emphasizes the resulting state of the V1 after certain motion. The frame needs only one compulsory theme argument, which is realized as the subject NP. That the argument is actually a theme can be checked from the fact that in this construction another object is not allowed. For instance, *bacchaTa lOmba ghum ghumie poReche is ungrammatical, where lOmba ghum 'long sleep' is inserted as an object. However, with the single verb ghumono 'to sleep' the object argument is perfectly acceptable.
NP1 (theme) V1 V2
35. bacchaTa ghumie poReche.
Child-cl sleep fall-pr-pft-3p
'The child has fallen asleep.'
9. bOSa 'to sit'
The verb frame of the single intransitive verb boSa has one compulsory agent argument with one optional (often unrealized in a sentence) location of seating.
NP1 (agent) NP2-e/te (location) V
36. tumi maTite boSle kEno?
You floor-loc sit-pt-2p-NH why
'Why did you sit on the floor?'
However, when used in a compound verb frame, it refers only to an aspectual function of the V1: the suddenness or unexpected nature of the event. The verbs it goes with are transitives like bOla 'to say' and kOra 'to do'.
NP1 (agent) NP2 (time or location) V1 V2 (ke jane)
37. meeTa kOkhon ki bole boSbe ke jane.
girl-cl when what say sit-fut-3p who knows
'Who knows when the girl will say what (unexpected)!'
10. phEla 'to drop'
The verb phEla, when used alone, takes an agent argument, a theme and optionally a location of dropping.
NP1 (agent) NP2 (theme) NP3-e/te (location) V
38. baccaTa cabiTa nice phelechilo.
child-cl key-cl downstairs-loc drop-pt-pft-3p
'The child had dropped the key downstairs.'
As a vector it is used in the sense of 'to do the event of V1 without hesitation or further consideration', as in bole phEla 'speak out', kore phEla 'do without hesitation'. The frame has one agent and a patient with some manner adverbial.
NP1 (agent) NP2 (patient) AdvP V1 V2
39. Oto bhabchiS ki, kajTa cOTpOT kore phEl.
Much think-pr-prog-2p-NH what, work-cl fast do drop-imp-2p-NH
'What are you</s>
<s>thinking so much? Do the job quickly.'
The second use, in the sense of an unwanted event of V1, can be found with the same set of verbs. The frame typically has the question words kOkhon/kothay and ki as time/location and patient arguments.
NP1 (agent) NP2 (time or location) NP3 (patient) V1 V2
40. o je kOkhon ki kore phelbe tar Thik nei.
(s)he comp when what do drop-fut-3p its clear not
'It is not clear when he will do what (unexpectedly).'
Notice that the same verb may be used with both senses. The third use is completely aspectual, and it indicates the telicity of the event of V1. Transitive verbs with an overt argument go with this reading of phEla, e.g. chuMRe phEla 'to throw out', bheMe phEla 'to break', gile phEla 'to swallow'.
NP1 (agent) NP2 (theme) NP3-e/te (location) V1 V2
41. tini phuler toRaTa baje kagojer jhuRite chuMRe phellen.
(s)he flower-gen bouquet-cl waste paper basket-loc throw drop-pt-3p
'She threw the bouquet of flowers in the waste paper basket.'
11. jaoa 'to go'
As a single verb, jaoa has an agent of the motion and a goal or destination where the agent goes. It may also alternatively refer to the path of the motion, marked by the postposition -die/hoe.
NP1 (agent) NP2 (goal) V
43. ami dilli jacchi.
I Delhi go-pr-prog-1p
'I am going to Delhi.'
When it is used as a vector, it loses the goal argument and is used with some time adverbial denoting the duration of the event. The TAM expressed is either present perfect continuous or habitual present, and it is realized by either the simple present continuous or the simple present in the language.
NP1 (agent) NP2 (temporal) V1 V2
44. meeTa Sokal theke khele jacche.
girl-cl since morning play go-pr-prog-3p
'The girl has been playing (continuously) since morning.'
NP1 (agent) NP2 (temporal) AdvP V1 V2
45. lokTa Saradin cupcap kaj kore jay.
man-cl whole day silently work do go-pr-3p
'The man keeps on working silently for the whole day.' (habitually)
12. cOla 'to go on/to work (for a thing)'
cOla as a main verb may mean 'to walk or move' or 'to go on or to work (for a thing)'. In the first sense, it takes an agent and a path or a goal argument as in (46).
NP1 (agent) NP2 (path) V
46. amra Onek rasta collam.
We long way walk-pt-1p
'We walked a long way.'
47. tumi kothay colle?
You where go-pt-2p
'Where are you going?' (past form used in a present sense)
When used in the sense of 'to go on', cOla takes one theme argument, which is realized as the subject. The verb frame may occasionally have a manner adverbial.
NP1 (theme) AdvP V
48. purodOme kaj colche.
in full swing work go on-pr-prog-3p
'The work is going on in full swing.'
49. ghoRiTa ThikThak colche.
Clock-cl alright work-pr-prog-3p
'The clock is working alright.'
The verb, when used as a vector, provides aspectual information about the main verb. It is used with present perfect continuous TAM, realized as present perfect, and</s>
<s>often has a time adverbial denoting the duration of the V1.
NP1 (agent) NP2 (temporal) V1 V2
50. cheleTa Sokal theke kaj kore coleche.
boy-cl morning since work do move-pr-pft-3p
'The boy has been working continuously since morning.'
13. paThano 'to send'
As a main verb, paThano takes two object arguments, one theme and one recipient, along with an agent.
NP1 (agent) NP2 (recipient) NP3 (theme) V
51. ami tomake boiTa paThacchi.
I you-obj book-cl send-pr-prog-1p
'I am sending you the book.'
As a vector it is used in the sense of causativization. There are a causer, a causee and an intermediate agent with the postposition die.
NP1 (causer agent) NP2-die (instrument) NP3-ke (causee) V1 V2
52. SahoS dEkho, jhike die amake Deke paThieche.
audacity look, maid-obj by summon send-pr-pft-3p
'Look at the audacity, (she) has summoned me through a maid-servant.'
14. bERano 'to roam'
When used alone, bERano takes an agent and a location argument.
NP1 (agent) NP2-te (location) V
53. tOkhon amra bagane beRacchilam.
Then we garden-loc roam-pt-prog-1p
'Then we were roaming in the garden.'
When used as a vector, it is used in the sense of 'non-directional movement' (Paul (2010) calls this a random action of V1 without discretion) and, instead of a location, may take a temporal or manner adverbial.
NP1 (agent) NP2 (temporal) V1 V2
54. ma-mOra meeTa Saradin keMde bERacche.
mother-died girl-cl whole day cry roam-pr-prog-3p
'The girl whose mother has died has been crying here and there for the whole day.'
AdvP NP1 (agent) V1 V2
55. Sudhu Sudhu ghure bERaccho kEno?
worthlessly wander roam-pr-prog-2p why
'Why are you wandering here and there worthlessly?'
15. mOra 'to die'
mOra is used as an intransitive verb with one undergoer of the event.
NP1 (undergoer) V
56. lokTa moreche.
man-cl die-pr-pft-3p
'The man has died.'
When used in a compound verb construction, it denotes the futility of the event of V1.
NP1 (agent) (AdvP) V1 V2
57. orOkom keMde morchiS kEno?
Such cry die-pr-prog-2p-NH why
'Why are you crying so futilely?'
When it is combined with a transitive verb like bhaba 'to think', it retains the object argument of that verb in the resulting compound verb frame.
NP1 (agent) NP2 (patient) V1 V2
58. ami Saradin tomar kOtha bhebe morchi ar tomar kono kheali nei?
I whole day your word think die-pr-prog-1p and you-gen any concern-emph neg
'I have been thinking of you only the whole day, and you don't have any concern at all?'
3 Findings and Results
From the above data, a total of 45 tokens of verb frames are found; they are listed below together with their respective verbs.
i. NP1 (Agent) NP2-theke (source) V
ii. NP1 (Agent) NP2-e (goal) V
iii. NP1 (Agent) NP2-hoe/die (Path) V
iv. NP1 (agent) NP2-r SOMge (associative) V
v. NP1 (agent) NP2-0/ke/re (patient) V1 V2 aSa
vi. NP1 (agent) NP2-0/ke/re (theme) V1 V2
vii. NP1 (agent) NP2 (temporal) V1 V2
viii. NP1 (agent) NP2 (matter/višay) V1 V2
ix. (AdvP) NP1 (undergoer) V1 V2
x. NP1 (agent) NP2-e/-er + a locative postposition (location) V
xi. NP1 (agent) NP2-0/ke/re (patient) V1 V2 rakha
xii. NP1 (agent) NP2-0/ke (patient) AdvP V1 V2
xiii.</s>
<s>NP1 (agent) NP2-theke (source) NP3-0/ke/re (theme) V
xiv. NP1 (agent) NP2-0/ke/re (theme) V1 V2 ana
xv. NP1 (agent) NP2-ke (recipient) NP3 (theme) V
xvi. NP1 (agent) NP2-0/ke (patient) V1 V2 dea
xvii. NP1 (agent) NP2-theke (source) NP3 (theme) V (as a vector it follows the frame of V1) nea
xviii. NP1 (agent) NP2-0/ke (theme) NP3-theke (source) V
xix. NP1 (agent) NP2 (patient) (AdvP) V1 (accomplishment) V2 tola
xx. NP1 (agent) NP2 (location/temporal) V
xxi. NP1 (theme) NP2 (location/temporal) V1 V2
xxii. NP1 (agent) NP2 (patient) V1 V2-te para neg oTha
xxiii. NP1 (agent) ki (QP) NP2 (patient) V1 V2 para?
xxiv. NP1-theke (source) NP2-e (location) NP3 (theme) V
xxv. NP1 (patient) V1 V2 pORa
xxvi. NP1 (agent) NP2 (theme) NP3-e/te (location) V
xxvii. NP1 (agent) NP2 (patient) AdvP V1 V2
xxviii. NP1 (agent) NP2 (temporal/location) NP3 (patient) V1 V2
xxix. NP1 (agent) NP2 (theme) V1 V2 phEla
xxx. NP1 (agent) NP2 (goal) V
xxxi. NP1 (agent) NP2 (temporal) V1 V2 jaoa
xxxii. NP1 (agent) NP2 (temporal) AdvP V1 V2
xxxiii. NP1 (agent) NP2-0 (theme) NP3-ke (recipient) V
xxxiv. NP1 (causer) NP2-die (instrument) NP3-ke (causee) V1 V2 paThano
xxxv. NP1 (agent) NP2-te (location) V
xxxvi. NP1 (agent) NP2 (temporal) V1 V2
xxxvii. AdvP NP1 (agent) V1 V2 bERano
xxxviii. NP1 (undergoer) (AdvP/AdvCl) V
xxxix. (AdvP) NP1 (agent) NP2 (patient) V1 V2 mOra
xl. NP1 (agent) NP2-0/te or -r locative postposition (location) V
xli. NP1 (agent) NP2 (temporal/location) NP3 (patient) V1 V2 (ke jane) bOSa
xlii. AdvP NP1 (theme) V
xliii. NP1 (agent) NP2 (temporal) V1 V2 cOla in the sense of 'go on/work'
xliv. NP1 (agent) NP2 (path) V
xlv. NP1 (agent) NP2 (goal) V cOla in the sense of 'move'
We can classify these 45 tokens of verb frames into 17 types.
i. NP1 (Agent) NP2-theke (source) V
ii. NP1 (Agent) NP2-e (goal) V
iii. NP1 (Agent) NP2-hoe/die (Path) V
iv. NP1 (agent) NP2-r SOMge (associative) V
v. NP1 (agent) NP2-0/ke/re (patient) V1 V2
vi. NP1 (agent) NP2-0/ke/re (theme) V1 V2
vii. NP1 (agent) NP2 (temporal) V1 V2
viii. NP1 (agent) NP2 (matter/višay) V1 V2
ix. (AdvP) NP1 (undergoer) V1 V2
x. NP1 (agent) NP2-e/-er + a locative postposition (location) V
xi. NP1 (agent) NP2-0/ke (patient) AdvP V1 V2
xii. NP1 (agent) NP2-theke (source) NP3-0/ke/re (theme) V
xiii. NP1-theke (source) NP2-e (location) NP3 (theme) V
xiv. NP1 (agent) NP2-ke (recipient) NP3 (theme) V
xv. NP1 (undergoer) (AdvP/AdvCl) V
xvi. AdvP NP1 (theme) V
xvii. NP1 (causer) NP2-die (instrument) NP3-ke (causee) V1 V2</s>
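Since the paper proposes these frames as a lexical resource for NLP applications, they lend themselves to a machine-readable encoding. The following is a minimal sketch of one possible encoding (our illustration, not part of the paper); the record layout and field names are assumptions, and only three of the frames above are shown.

# Minimal sketch (assumption, not the paper's resource): encoding a few
# of the verb frames above as records that could feed a parser or MT
# lexicon, as the paper suggests.
from dataclasses import dataclass

@dataclass
class Frame:
    verb: str        # vector verb in the paper's transliteration
    sense: str       # sense label from the paper
    args: list       # ordered (role, case_marker) pairs; None = unmarked

FRAMES = [
    # i.  NP1 (Agent) NP2-theke (source) V        -- aSa 'come', deixis
    Frame("aSa", "deixis-source", [("agent", None), ("source", "-theke")]),
    # ii. NP1 (Agent) NP2-e (goal) V
    Frame("aSa", "deixis-goal", [("agent", None), ("goal", "-e")]),
    # xvii. NP1 (causer) NP2-die (instrument) NP3-ke (causee) V1 V2
    Frame("paThano", "causative",
          [("causer", None), ("instrument", "-die"), ("causee", "-ke")]),
]

def frames_for(verb):
    """Return all frame records listed for a given vector verb."""
    return [f for f in FRAMES if f.verb == verb]

print(frames_for("aSa"))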
<s>The general observation which comes out from these frames is that intransitive verbs are generally used with an adverbial phrase or clause which depicts the manner of the event. They may also have a temporal or spatial NP in their frames which denotes the time or the location of the event. However, they no longer need their compulsory locative NP argument or manner adverbial phrase or clause in the compound verb structure. The V2s, when used in a pure aspectual sense, as in jaoa, aSa (one sense is aspectual), rakha (one sense only) and cOla, or in an aspectual and some benefactive adverbial sense, as in deoa and nea, do not retain their original syntactic arguments in the compound verb structure. With the other adverbial senses, like suddenness or unexpectedness of the event, with the V2s phEla and bOSa, the argument and case marking of the vector verb are not retained in the new compound verb structure. The deictic motion verb aSa may retain its goal, source or path arguments in a compound verb structure when it is combined with a non-directional motion verb which talks about the manner of the motion. The other verb, ana, which has a source deictic motion sense, retains it in the compound verb structure also.
4 Conclusion
This work is only a first attempt to build a knowledge-based lexical resource for Bangla verbs, and it has a very wide scope: it can be extended to fully develop frames for all the verbs of Bangla and to classify them on the basis of syntactic and semantic information. At present, the work does not incorporate any semantic feature in the argument structure of the verbs, but future work can incorporate that to classify the nature of the arguments with the same semantic or theta role. Once such a resource is fully developed, it can be linked to the Indo WordNet also.
References
Abbi, A. and Gopalkrishnan, D. 1992. "Semantic Typology of Explicator Compound Verbs in South Asian Languages". Paper retrieved online.
Bashir, Elena (1993) "Causal chains and compound verbs." In M. K. Verma ed. (1993) Complex Predicates in South Asian Languages. Manohar Publishers and Distributors, New Delhi.
Butt, Miriam (1993) "Conscious choice and some light verbs in Urdu." In M. K. Verma ed. (1993) Complex Predicates in South Asian Languages. Manohar Publishers and Distributors, New Delhi.
Butt, M. 1995. The Structures of Complex Predicates in Urdu. Dissertations in Linguistics. Stanford: CSLI.
Fedson, V. J. (1993) "Complex verb-verb predicates in Tamil." In M. K. Verma ed. (1993) Complex Predicates in South Asian Languages. Manohar Publishers and Distributors, New Delhi.
Hook, Peter (1974) The Compound Verb in Hindi. Ann Arbor: University of Michigan Center for South and Southeast Asian Studies.
Hook, Peter (1996) The Compound Verb in Gujarati and its Use in Connected Text. In R.T. Vyas (ed.) Consciousness Manifest: Studies in Jaina Art and Iconography and Allied Subjects in Honour of Dr. U.P. Shah. Vadodara: Oriental Institute. 339-56.
Kachru, Yamuna (1982) "Pragmatics and compound verbs in Indian languages." In O. N. Kaul ed. Topics in Hindi Linguistics. Bahari Publications, New Delhi.
Kachru, Yamuna and R. Pandharipande (1980) "Towards a typology of compound verbs in South Asian languages." Studies in Linguistic Sciences, 10:1, 113-24.
Kaul, Vijay K. (1985) The Compound Verb in Kashmiri. Unpublished Ph.D. dissertation. Kurukshetra University.
Levin, B. 1993. English Verb Classes and Alternations: A Preliminary Investigation. The University of Chicago Press.
Mohanty, G. 1992. The Compound Verbs in Oriya. Ph.D. dissertation. Deccan College Post-graduate and Research Institute.
Paul, S. 2004. An HPSG Account of Bangla Compound Verbs with LKB Implementation. Ph.D. dissertation, University of Hyderabad, Hyderabad.
Paul, S. 2005. The semantics of Bangla compound verbs. Yearbook of South Asian Languages and Linguistics. 101-112.
Rafiya Begum et al. (undated online version). Developing Verb Frames for Hindi. IIITH.
Rajesh Kumar and Nilu. 2012. Magahi Complex Predicates. IJDL.</s>
<s>Extracting Semantic Relatedness for Bangla Words
Abdullah Al Hadi1, Md. Yasin Ali Khan2 and Md. Abu Sayed3
Department of Computer Science and Engineering, 1,2 Chittagong University of Engineering and Technology, Chittagong-4349, Bangladesh; 3 American International University-Bangladesh, Banani, Dhaka-1213, Bangladesh
1 10abdullah61@gmail.com, 2 shihabyasin@gmail.com, 3 abusayed93.cse@gmail.com
Abstract— A framework for extracting semantically related words in Bangla is presented in this paper. Here the extraction of synonyms, antonyms, hyponyms, hypernyms, meronyms, holonyms and polysemes is primarily investigated as a rule-based model. For every word two other things, its concept and its part-of-speech category, are also presented for clarification. A semantic analyzer is used to extract these relations from nouns, adjectives and verbs.
Keywords— semantic relatedness, relation extraction, rule-based model, semantic similarity.
I. INTRODUCTION
Natural Language Processing (NLP) is a growing field of interest for researchers in computer science, artificial intelligence, linguistics and human-computer interaction [1]. Semantic relations are unidirectional underlying connections between concepts, and semantics studies the meaning of a language. Language processing consists of morphological, syntactic, semantic and pragmatic analysis steps, in which semantic relatedness is important. Of the two types of semantic approaches, 'Compositional Semantics' deals with the meaning of individual units and how they form larger units, while 'Lexical Semantics' identifies and represents the semantics of each lexical item, which helps in understanding the meaning of larger units. Semantic relatedness has many important applications in inference, reasoning, question answering, information extraction, machine translation and other NLP applications. Actually, semantic relations work like building blocks for creating the semantic structure of a sentence. Semantic relatedness implies the degree to which words are associated via any relation like synonymy, meronymy, hyponymy, hypernymy, or functional, associative and other types of semantic relationships. It has immense application in information retrieval, automatic indexing, word sense disambiguation, automatic text correction etc. This paper proposes a rule-based approach for measuring semantic relatedness between Bangla words. The semantic relatedness between words is computed based on the features they possess, using some predefined rules.
II. RELATED WORKS
In the literature, different works on semantic relatedness and relation extraction are found. One of the earlier works, from Princeton University, was WordNet [2] in English by George Miller in 1985; it is now directed by Christiane Fellbaum [3]. Other mentionable works are FrameNet [4], PropBank [5] and the feature-based similarity model of [6]. Semantic similarity between words using multiple information sources was investigated by Li et al. [7]. Relations between nominals were investigated by Girju et al. [8], and relations between noun phrases were investigated by Davidov [9] and Moldovan [10]. Relations between named entities and between clauses were investigated by Hirano et al. [11] and Szpakowicz et al. [12] respectively. Measures of semantic similarity and relatedness in the biomedical domain were investigated by Ted Pedersen et al. [13]. Similarity based on corpus statistics and lexical taxonomy was investigated by [14]. Similarity measurement based on web search engines was described by Bollegara et al. [15] and, for Google, by Cilibrasi et al. [16].
Wikipedia-based semantic relatedness can be found in [17, 18]. Das et al. [19] developed a Semantic Net in Bangla that is basically based on the common usage</s>
<s>of Bengali people. For Bangla, IIT Bombay gives a miniature idea based on the Princeton WordNet, only for synonyms; M. Khan proposed some modifications there [20]. Getting motivation from the above works, we would like to propose an automated as well as independent semantic relation extractor with a set of semantic features for Bangla words. The main investigations of this paper can be stated as:
• To design a semantic relation extractor that can identify the relationships among Bangla words.
• To implement the system by proposing a set of semantic features for Bangla word categories.
• To verify the system for several kinds of Bangla words.
III. SEMANTIC RELATIONS
Theoretically, semantic relations can be described by R(x, y), where R is the relation type and x, y are the first and second arguments correspondingly. This section discusses some relations between lexical items.
• Synonymy: Refers to words that are pronounced and spelled differently but carry the same meaning. For example, anondo (আনn), ullash (ulাস) and khushi (খু িশ) are synonyms.
• Antonymy: Refers to words that are related by having opposite meanings. For example, hasi (হািস) and kanna (কাnা) are antonyms of each other.
• Hyponymy and Hypernymy: Refer to a relationship between a general term and the more specific terms that fall under the category of the general term. For example, the colors lal (লাল), sobuj (সবজু), sada (সাদা) and holud (হলদু) are hyponyms. They fall under the general term rong (রঙ), which is the hypernym of the above colors.
• Polysemy: A single word or phrase with two or more distinct meanings. For example: pata (পাতা): leaf of a tree; pata (পাতা): page of a book.
• Holonymy and Meronymy: A semantic relation that exists between a term denoting a whole (the holonym) and a term denoting a part that pertains to the whole (the meronym). For example, angul (আ লু) is a meronym of hat (হাত) because angul (আ লু) is part of a hat (হাত), and hat (হাত) is a holonym of angul (আ লু).
In a language, a word may appear in more than one grammatical category, and within that grammatical category it can have multiple senses. Lexical semantic relations support the grammatical categories, namely Noun (িবেশষয্), Adjective (িবেশষণ) and Verb (িkয়া).
IV. MATHEMATICAL REALIZATION
In this section a mathematical description of semantic relatedness is given. Let W1 be the input word and F1 = {f11, f12, f13, ..., f1n} be the set of features of the word W1. Now R is a relation (e.g. synonymy, antonymy, hypernymy, hyponymy, polysemy or holonymy) to find a word W2 which should be related to W1 in such a way that W1 and W2 conform to the definition of R:
R{W1(F1)} = W2(F2)   (1)
meaning W1 and W2 are R-related, where F2 = {f21, f22, f23, ..., f2n} is the set of features of word W2. For the relation synonymy, W1 and W2 will share all their features with equal values. For antonymy, W1 and W2 will share almost all of their features except one, and this one contains the reverse value. For hypernymy, W2 will share almost all of the features of W1 except one, and this one defines a general term, i.e. it contains a neutral value. For hyponymy, W2 will share almost all of the features of W1 except one, and this one contains a neutral value for W1 and a polar (positive or negative) value for W2. For polysemy, W1 and W2 will be the same (i.e. W1 = W2) but the features are different (i.e. F1 is not exactly equal to F2). For meronymy, W2 will share almost all of the features of W1 except one, and this one contains a fractional value in F2. For holonymy, W2 will share almost all of the features of W1 except one, and this one contains a fractional value in F1 but not in F2.</s>
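The relation rules above lend themselves to a direct implementation. The following is a minimal sketch (our illustration, not the authors' Java system) that encodes each word's features as a dictionary and tests the synonymy, antonymy and hypernymy conditions; the feature names and values follow the examples given in the next section.

# Minimal sketch (illustration only, not the authors' system): testing
# the relation rules of Section IV on feature dictionaries. Values here
# are polar (+1/-1) or neutral (0); null ('x') features would need
# extra special-casing and are omitted from the toy data.
def synonyms(f1, f2):
    # Synonymy: W1 and W2 share all features with equal values.
    return f1 == f2

def antonyms(f1, f2):
    # Antonymy: all features equal except exactly one, which is reversed.
    diff = [k for k in f1 if f1[k] != f2[k]]
    return len(diff) == 1 and f2[diff[0]] == -f1[diff[0]]

def is_hypernym(word_f, cand_f):
    # Hypernymy: all features of W1 shared except one, which is
    # neutral (0) in the candidate hypernym W2.
    diff = [k for k in word_f if word_f[k] != cand_f[k]]
    return len(diff) == 1 and cand_f[diff[0]] == 0

anondo  = {"Animate": -1, "Human": 1, "Gender": 0, "Emotion": 1}
ullash  = {"Animate": -1, "Human": 1, "Gender": 0, "Emotion": 1}
dukkho  = {"Animate": -1, "Human": 1, "Gender": 0, "Emotion": -1}
onuvuti = {"Animate": -1, "Human": 1, "Gender": 0, "Emotion": 0}

print(synonyms(anondo, ullash))      # True
print(antonyms(anondo, dukkho))      # True (Emotion reversed)
print(is_hypernym(anondo, onuvuti))  # True (Emotion neutralized)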
<s>V. METHODOLOGY AND SYSTEM ARCHITECTURE
The key objective of our work is to design a semantic relation extractor that can identify different relational words. The schematic representation of our proposed analyzer is illustrated in Fig. 1 (Fig. 1: Schematic representation of the proposed system). First of all, some words were selected with effective features to store in the database using the input interface. These features are chosen in such a way that they can illustrate how words are similar and/or different, and they emphasize the uniqueness of each word. For example, the features for the words "আনn" and "ulাস" will be [Animate (-1), Human (+1), Gender (0), Emotion (+1)], and for the word "দঃুখ" [Animate (-1), Human (+1), Gender (0), Emotion (-1)]. Again, for "aনভূুিত", the features will be [Animate (-1), Human (+1), Gender (0), Emotion (0)]. The words may be Nouns (িবেশষয্), Adjectives (িবেশষণ) or Verbs (িkয়া). In the database engine these words are kept in different tables, since the features of each word category are different. In linguistics, this database engine is called a Lexicon, which is a dictionary of words where each word carries some syntactic, semantic and possibly pragmatic information. Example database tables and their corresponding features are illustrated below in Tables 2, 3, 4, 5, 6 and 7.

Table 2: Example Noun Table from Database
Features    | baba | pita | ma | manush | chokh
Countable   |  1   |  1   |  1 |   1    |  1
Common      | -1   | -1   | -1 |   1    |  0
Animate     |  1   |  1   |  1 |   1    |  1.1
Human       |  1   |  1   |  1 |   1    |  1.1
Honourable  |  1   |  1   |  1 |   0    |  x
Gender      |  1   |  1   | -1 |   0    |  x
Adult       |  1   |  1   |  1 |   0    |  x
Material    |  x   |  x   |  x |   x    |  x
Solid       |  x   |  x   |  x |   x    |  x

Table 3: Feature Description of Noun
Countable: Countable = 1, Uncountable = -1
Common: Common = 1, Proper = -1, Neutral = 0, Not applicable = null (x)
Animate: Animate = 1, Inanimate = -1, Not applicable = null (x)
Person: Person = 1, Neuter = -1, Not applicable = null (x)
Honorable: Honorable = 1, Non-honorable = -1, Neutral = 0, Not applicable = null (x)
Gender: Male = 1, Female = -1, Neutral = 0, Not applicable = null (x)
Adult: Old/Very Old = 2, Middle Age = 1, Young = -1, Little age/child = -2, Neutral = 0, Not applicable = null (x)
Material: Material = 1, Abstract = -1, Not applicable = null (x)
Solid: Solid = 1, Non-Solid = -1, Not applicable = null (x)</s>
<s>Table 4: Example Adjective Table from Database
Features  | anondo | ullash | dukkho | valo | chalak | abeg
Animate   |  -1    |  -1    |  -1    |  1   |   1    |  -1
Human     |   1    |   1    |   1    |  1   |   1    |   1
Gender    |   0    |   0    |   0    |  0   |   0    |   0
Quality   |   x    |   x    |   x    |  1   |   2    |   x
Emotion   |   1    |   1    |  -1    |  x   |   x    |   0
Quantity  |   x    |   x    |   x    |  x   |   x    |   x
Size      |   x    |   x    |   x    |  x   |   x    |   x
Beauty    |   x    |   x    |   x    |  x   |   x    |   x

Table 5: Feature Description of Adjective
Animate: Animate = 1, Inanimate = -1
Human: Human = 1, Neuter = -1
Gender: Male = 1, Female = -1, Neutral = 0
Quality: Good Quality = positive value (+), Bad Quality = negative value (-), For distinguishing = 1, 2, 3, 4, Neutral = 0, Not Applicable = null (x)
Emotion: Good Emotion = positive value (+), Bad Emotion = negative value (-), Neutral = 0, Not Applicable = null (x)
Quantity: Large Quantity = positive value (+), Small Quantity = negative value (-), For distinguishing = 1, 2, 3, Neutral = 0, Not Applicable = null (x)
Size: Big Size = positive value (+), Small Size = negative value (-), For distinguishing = 1, 2, 3, 4, Neutral = 0, Not Applicable = null (x)
Beauty: Beautiful = positive value (+), Ugly = negative value (-), Neutral = 0, Not Applicable = null (x)

Table 6: Example Verb Table from Database
Features  | Jog_kora | Biog_kora | Deoa | Neoa | Prodan_kora | Poriborton_kora
Animate   |  -1      |  -1       |  -1  |  -1  |  -1         |  0
Person    |   1      |   1       |   1  |   1  |   1         |  1
Gender    |   0      |   0       |   0  |   0  |   0         |  0
Move      |   x      |   x       |   x  |   x  |   x         |  x
Change    |   1      |  -1       |   2  |  -2  |   2         |  0
State     |   x      |   x       |   x  |   x  |   x         |  x
Decision  |   x      |   x       |   x  |   x  |   x         |  x

Table 7: Feature Description of Verb
Animate: Animate = 1, Inanimate = -1
Human: Human = 1, Neuter = -1
Gender: Male = 1, Female = -1, Neutral = 0
Move: In = positive value (+), Out = negative value (-), Neutral = 0, Not Applicable = null (x)
Change: Value upgrading/Possessing/Constructing = positive value (+), Value degrading/Give up/Destructing = negative value (-), For distinguishing = 1, 2, Neutral = 0, Not Applicable = null (x)
State: Continuity/Starting = positive value (+), Discontinuity/Ending = negative value (-), For distinguishing = 1, 2, Neutral = 0, Not Applicable = null (x)
Decision: Supportive Decision = positive value (+), Anti-Supportive Decision = negative value (-), Neutral = 0, Not Applicable = null (x)

Tables 2 to 7 describe in detail which features and corresponding value ranges have been chosen for Nouns, Adjectives and Verbs respectively.</s>
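Because the paper stores the Lexicon in SQLite tables keyed by part of speech (Section VII notes Java and SQLite were used), the word lookup of the next section can be sketched as follows. This is our illustration; the table schema and column names are assumptions based on Tables 2 and 3, not the authors' actual database.

# Minimal sketch (assumed schema, not the authors' code): a noun table
# in SQLite and a Word Query that returns the feature row for a word.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE noun (
    word TEXT PRIMARY KEY, countable REAL, common REAL, animate REAL,
    human REAL, honourable TEXT, gender TEXT, adult TEXT)""")
# Values taken from Table 2; 'x' stands for "not applicable".
conn.execute("INSERT INTO noun VALUES ('baba', 1, -1, 1, 1, '1', '1', '1')")
conn.execute("INSERT INTO noun VALUES ('chokh', 1, 0, 1.1, 1.1, 'x', 'x', 'x')")

def word_query(word):
    """Search the lexicon table and return the feature row (or None)."""
    cur = conn.execute("SELECT * FROM noun WHERE word = ?", (word,))
    return cur.fetchone()

print(word_query("baba"))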
<s>VI. ILLUSTRATED EXAMPLE
Take a sample word, for example "আনn", from the user interface (Fig. 8: User Interface) for processing. The word will be searched in each table of the Lexicon by the Word Query. Queries are the primary mechanism for retrieving information from a database; many database management systems use the Structured Query Language (SQL) standard query format. The Word Query will return a pointer value (Table T and Row R). The Feature Extractor will extract all features [Animate (-1), Human (+1), Gender (0), Emotion (+1)] of the pointer (T, R), which are the key input of the Relation Analyzer. The analyzer will analyze the extracted features for each relation; the acceptability of our work mainly depends on this step. Then the analyzer will build a query from the analyzed data to extract the closely related word(s) from the Lexicon. For a synonym, it will extract the word "ulাস", since its features are the same [Animate (-1), Human (+1), Gender (0), Emotion (+1)]; for an antonym, it will extract the word "দঃুখ", since at least one feature [Emotion (-1)] is opposite. Again, it will extract the word "aনভূুিত" as a hypernym, since one feature [Emotion (0)] is not clearly defined. Then the word(s) and possibly some other information (Category, Sub-category, Concept, Example) will be shown in the Output Interface.
VII. EXPERIMENTS
A. System Requirements: An Intel(R) Core(TM) i3-2100 CPU at 3.10 GHz is used, having 4 GB RAM and a 32-bit operating system.
B. Implementation: For designing this system, Java is used as the programming language and SQLite as the database.
C. Evaluation and Measurement: For evaluation, some words were selected randomly, and after inputting the words into the system the performance was measured.
D. Limitation: There is no ideal convention for selecting features. This is totally subjective and depends highly on the application domain.
VIII. RESULTS
For measuring the performance of our model engine we chose the random sampling method. We randomly selected 80 words for testing, and took note of the number of words for which all relations were correctly retrieved and the number of words for which at least one relation was incorrectly retrieved. After several experiments we calculated the average number of words for which all relations were correctly retrieved and the average number of words for which at least one relation was incorrectly retrieved. Then we measure error and accuracy using formulas like the following: Accuracy = (average number of words with all relations correctly retrieved / number of input words) × 100%, and Error = (average number of words with at least one relation incorrectly retrieved / number of input words) × 100%. After experimenting randomly with different Bangla words taken from the built-in corpora, we have seen mentionable performance, as shown in Table 9.

Table 9: Experimental Result
Word Category | Input words (random sampling) | Avg. words with all relations correct | Avg. words with at least one relation incorrect | Error | Accuracy
Noun          | 80 | 75 | 5 | 6.25% | 93.75%
Adjective     | 80 | 78 | 2 | 2.5%  | 97.5%
Verb          | 80 | 78 | 2 | 2.5%  | 97.5%
Overall accuracy = 96.25% and Error = 3.75%

The reason for the lower accuracy on nouns is their word variation: it is not always possible to identify each noun word specifically. Many nouns are so general that one extra specific feature must be added to identify the noun separately. By adding more proper features, the accuracy of the system may be increased. A major limitation of this work is the relatively small size of the lexicon compared to other works in non-Bengali languages, and also the personal influence on selecting features for different types of words, as there are no state-of-the-art rules for it. To the best of our knowledge, this is the first work</s>
<s>in the Bengali literature extracting semantic relatedness.
IX. CONCLUSION
A feature-based semantic relatedness system is presented for Bangla. The various semantic features can indicate the semantic structure of a word. The system does not depend on specific lexical resources or knowledge representation languages. As it uses its own source of data, it maximizes the coverage of possible interpretations. In this work, as feature engineering is highly subjective, more analytical review may increase performance. Experimental results show satisfactory performance. Future research will be conducted on extending the feature set to more lexical units, such as noun phrases and multiword expressions, with more effective features and with more words from the Bangla language. It will also be interesting to investigate semantic distances.
REFERENCES
[1] Wikipedia. "Natural Language processing", Wikipedia.org. Available: http://en.wikipedia.org/wiki/Natural_langage_processing [Last Modified: 3 March 2015, 16:51].
[2] Miller, George A. WordNet: A Lexical Database for English. Communications of the ACM, 1995, 38:39–41.
[3] C. Leacock and M. Chodorow, Combining Local Context and WordNet Similarity for Word Sense Identification, in WordNet, an Electronic Lexical Database, 1998, pp. 265-283, MIT Press.
[4] Baker, Collin F., Charles J. Fillmore, and John B. Lowe, "The Berkeley FrameNet Project". In Proceedings of the 17th International Conference on Computational Linguistics, Montreal, Canada, 1998.
[5] Palmer, Martha, Daniel Gildea, and Paul Kingsbury. The Proposition Bank: An Annotated Corpus of Semantic Roles. Computational Linguistics, 2005, 31(1):71–106.
[6] Tversky, A. Features of similarity. Psychological Review, 1977, 84(4):327.
[7] Yuhua Li, Zuhair A. Bandar, and David McLean, An Approach for Measuring Semantic Similarity between Words Using Multiple Information Sources, IEEE Transactions on Knowledge and Data Engineering, vol. 15, 2003, pp. 871-882.
[8] Girju, Roxana, Preslav Nakov, Vivi Nastase, Stan Szpakowicz, Peter Turney, and Deniz Yuret. 2007. SemEval-2007 Task 04: Classification of Semantic Relations between Nominals. In Proceedings of the Fourth International Workshop on Semantic Evaluations, pp. 13–18, Prague, Czech Republic.
[9] Davidov, Dmitry and Ari Rappoport. Classification of Semantic Relationships between Nominals Using Pattern Clusters. In Proceedings of ACL-08: HLT, 2008, pp. 227–235, Columbus, Ohio.
[10] Moldovan, Dan, Adriana Badulescu, Marta Tatu, Daniel Antohe, and Roxana Girju. Models for the Semantic Classification of Noun Phrases. In HLT-NAACL 2004: Workshop on Computational Lexical Semantics, 2004, pp. 60–67.
[11] Hirano, Toru, Yoshihiro Matsuo, and Genichiro Kikui. Detecting Semantic Relations between Named Entities in Text Using Contextual Features. In Proceedings of the 45th Annual Meeting of the ACL, Demo and Poster Sessions, 2007, pp. 157–160.
[12] Barker, Ken and Stan Szpakowicz. Interactive Semantic Analysis of Clause-Level Relationships. In Proceedings of the Second Conference of the Pacific ACL, 1995, pp. 22–30.
[13] Ted Pedersen, Serguei V.S. Pakhomov, Siddharth Patwardhan and Christopher G. Chute, Measures of semantic similarity and relatedness in the biomedical domain, Journal of Biomedical Informatics, vol. 40, 2007, pp. 288-299.
[14] Jiang, J. and Conrath, D. Semantic similarity based on corpus statistics and lexical taxonomy. In: Proceedings of the 10th international conference on research in computational linguistics, 1997, pp. 19–33, Taipei, Taiwan.
[15] Danushka Bollegara, Yutaka Matsuo, and Mitsuru Isizuka, Measuring Semantic Similarity between Words Using Web Search Engines, Proceedings of the 16th International World</s> |
<s>Wide Web Conference (WWW2007), pp. 757-766, Banff, Alberta, Canada, 2007.
[16] Rudi L. Cilibrasi and Paul M.B. Vitanyi, The Google Similarity Distance, IEEE Transactions on Knowledge and Data Engineering, vol. 19, 2007, pp. 370-383.
[17] Evgeniy Gabrilovich and Shaul Markovitch, Computing Semantic Relatedness using Wikipedia-based Explicit Semantic Analysis, Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI), pp. 1606-1611, Hyderabad, India, 2007.
[18] Michael Strube and Simone Paolo Ponzetto, WikiRelate! Computing Semantic Relatedness Using Wikipedia, Proceedings of the 21st National Conference on Artificial Intelligence, 2006, pp. 1419-1424, Boston, Mass.
[19] Das, A. and Bandyopadhyay, S. (2010). Semanticnet-perception of human pragmatics. In Proceedings of the 2nd Workshop on Cognitive Aspects of the Lexicon, pp. 2–11, Beijing, China. Coling 2010 Organizing Committee.
[20] Kamrul Hayder, Naira Khan, and Mumit Khan, "Bangla WordNet Development Challenges and Solutions", Center for Research on Bangla Language Processing, October 8, 2007, BRAC University.</s>
<s>Semantic Error Detection and Correction in Bangla Sentence
M. F. Mridha*, Md. Abdul Hamid‡, Md. Mashod Rana€, Md. Eyaseen Arafat Khan£, Md. Masud Ahmed¥, Mohammad Tipu Sultan#
Department of Computer Science and Engineering, University of Asia Pacific, Dhaka, Bangladesh
*firoz@uap-bd.edu, ‡ahamid@uap-bd.edu, €mashod0rana@gmail.com, £eyaseenarafatkhan08@gmail.com, ¥mdmasudrana81uap@gmail.com, #tipu07u5@gmail.com
Abstract—Detection and correction of errors in Bengali text is essential. In general, Bengali text errors can be classified into non-word errors and semantic errors (also known as context-sensitive errors). To date, auto-correction of semantic errors in Bengali sentences has remained challenging, since there is no significant research work on this very topic. In this paper, we bring out the concept of semantic error detection and correction. We have developed a method that can detect and correct this kind of error. Semantic errors include typographical errors, grammatical errors, homophone errors, homonym errors etc. Our goal in this study is to develop an approach to handle multiple semantic errors in a sentence. We have used our own confusion-word list, built by edit distance, and apply a Naïve Bayes classifier to detect and correct typographical and homophone errors. For a candidate word from a sentence, we pick out a set of words which is a collection of confused words. We use all other neighbouring words as features for each word from the confusion set. Then we apply the naïve Bayes theorem to calculate the probability and decide whether a target word is an error or not. We have used 28,057 sentences to evaluate our model and have achieved more than 90% accuracy. All data corpora used to evaluate the model were built by us. We strongly believe that the problem we have solved may significantly advance Bengali language processing.
Keywords—NLP; Naïve Bayes; Bangla; Semantic Error; Machine Learning.
I. INTRODUCTION
Writing is the most important way of communication for humans. Writing represents human language with signs and symbols, and it works as a tool to make language readable. Writing is used as an alternative to spoken language. Writing is important not only for communication but also for keeping records, publications, storytelling etc. Writing helps us to pass history from generation to generation, to maintain culture, and so on. Every language has a style of representation in textual form. Bengali is the 7th most spoken language in the world, with around 250 million speakers, and it has its own signs and symbols in textual representation. At present it is important to process a language within computer systems; for official and non-official purposes, we do process our Bengali language in computerized systems. The Bengali language has critical grammatical rules and complex orthographical rules, which is why it is not easy to process. When we are typing in an editor during chatting, mailing etc., we usually forget to maintain the rules of writing, and errors may occur frequently. Importantly, we cannot recognize them most of the time due to a lack of knowledge. So auto-correction of our text has now become a common expectation.</s>
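The abstract's confusion sets can be illustrated concretely. Below is a minimal sketch (our illustration, not the authors' implementation) that builds a confusion set for a word by collecting vocabulary entries within edit distance 1; the toy vocabulary is invented for the example.

# Minimal sketch (illustration, not the authors' code): building a
# confusion set as all vocabulary words within Levenshtein distance 1.
def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def confusion_set(word, vocabulary, max_dist=1):
    return {w for w in vocabulary if w != word
            and edit_distance(word, w) <= max_dist}

vocab = ["খাল", "খালি", "চাল", "ডাল", "ঢাল"]   # toy vocabulary
print(confusion_set("খাল", vocab))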
Errors occur at different levels, such as the word level and the sentence level. An error in the spelling of a word is a word-level error; an error that occurs semantically is a sentence-level error. That is, when a word is correct in itself but not appropriate for its position in the sentence, we call it a semantic error, also known as a context-sensitive error. Kukich [1] describes many types of error, including real-word and non-word errors. Non-word errors are easy to detect, but real-word errors are not, and in the Bengali language detection becomes even harder when the error occurs at the semantic level. In Bengali, context-sensitive errors can be categorized into homophone errors, homonym errors, typographical errors, grammatical errors, etc. Here we focus our attention on typographical errors, which also cover homophone errors. A typographical error occurs during typing: typing an extra character or missing a character changes the word. Unfortunately, if the changed word is still a valid word, it destroys the context of the sentence, and if more than one such error happens, it may change the whole meaning of the sentence. The following examples illustrate this point.

Example:
সব বয্াংকগুেলা লুটপাট কের খাল (খািল) কেরেছ
কারা পুিলেশর গু (গুিল) েখেয় বেকর (যুবেকর) মৃতুয্

In the above sentences, some letters and signs are missing from the underlined words, and the results are still valid words, but their meanings are different. In the first sentence, খাল means canal while খািল means empty, the difference between them being one vowel sign. In the second sentence, the word গু means faeces while গুিল means bullet, where the difference is a vowel sign; likewise বেকর means "of the egret" while যুবেকর means "of the youth", the wrongly typed word missing যু. This type of error is known as a deletion error. Though the words are valid, they change the meaning of the sentence and disturb its semantic structure.

Correct sentence: সবাi ভাল কােজর জনয্ েনকী পােব
Incorrect sentence: সবাi বাল (ভাল) aকােজর (কােজর) জনয্ েনকী পােব

In the incorrect sentence, in the word বাল (boy), the letter ব takes the place of the letter ভ (ভাল, good); this is known as a replaced error, and homophone errors can arise from replaced errors. The word aকােজর has an extra letter a, which is known as an insertion error. These two errors together ruin the sentence. There are also homophone errors proper, where two words have the same pronunciation but different spellings and meanings, e.g., ধান (paddy) and দান (donation): in the sentence দান (ধান) চাষ কর। the correct word is ধান, not দান.
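The deletion, insertion, and replaced errors illustrated above are exactly the single-character operations counted by edit distance, which is the stated basis of the confusion word lists. The following Python sketch, offered only as an illustration, generates the edit-distance-1 dictionary words for a given word; the simplified BANGLA_CHARS alphabet (a realistic one would also include vowel signs and conjuncts) and the dictionary parameter are assumptions, not details taken from the paper.

# Hypothetical: edit-distance-1 confusion candidates for one word.
BANGLA_CHARS = "অআইঈউঊএঐওঔকখগঘঙচছজঝঞটঠডঢণতথদধনপফবভমযরলশষসহ"  # simplified

def confusion_set(word, dictionary, alphabet=BANGLA_CHARS):
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {l + r[1:] for l, r in splits if r}                         # deletion error
    inserts = {l + c + r for l, r in splits for c in alphabet}            # insertion error
    replaces = {l + c + r[1:] for l, r in splits if r for c in alphabet}  # replaced error
    # Keep only real dictionary words other than the original word itself.
    return {w for w in deletes | inserts | replaces if w in dictionary and w != word}

With vowel signs included in the alphabet, confusion_set("খাল", lexicon) would contain খািল, capturing precisely the first example above.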
When a set of words has the same pronunciation and the same spelling but different meanings, they are called homonym words, and the error caused by such words is known as a homonym error. Table I summarizes the different types of error in Bengali sentences.

TABLE I. DIFFERENT TYPES OF ERROR
Sentence with error | Error type
আমার পাছায় (ছাপায়) চুল (ভূল) িছল | Replaced errors: পাছায় (in the back) vs. ছাপায় (print); চুল (hair) vs. ভূল (mistake)
পরীkায় কৃতকাযর্কারীেক (aকৃতকাযর্কারীেক) পুরsৃত করা হয় না | Deletion error: কৃতকাযর্কারীেক (the successful one) vs. aকৃতকাযর্কারীেক (the unsuccessful one)
aসৎ (সৎ) েলাক সmােনর চািবদার (দািবদার) | Insertion error: aসৎ (dishonest) vs. সৎ (honest); replaced error: চািবদার (key keeper) vs. দািবদার (claimant)
আিহংসা পরম ধমর্ (religion); মানুষ o পশুর ধমর্ (behavior) পৃথক | Homonym error: ধমর্ carries both meanings

In this research we detect sentence-level errors. We address the case in which a word is itself correct but the sentence has lost its intended expression, and we show how such a word can be replaced by a more appropriate one. We also solve the problem of more than one error occurring in a single sentence. Much research on this topic has been done in other languages, mostly English; in the Bengali language, however, very little work exists, and it is not well developed. To the best of our knowledge, we are the first to handle multiple semantic errors in a sentence with the help of Naïve Bayes classification.

The rest of the paper is organized as follows. Section II presents the literature review. Section III contains the proposed method, and Section IV describes the whole methodology in detail. The handling strategy for multiple errors is presented in Section V. Section VI describes the performance evaluation, and Section VII presents its outcome. Section VIII concludes our work along with future research directions.

II. RELATED WORK

Many methods have been developed in NLP to solve sentence-level errors; the most popular are statistics-based and rule-based approaches. In a rule-based approach, hand-crafted rules solve the problem, and the rules differ from language to language. The statistical approach is more popular than the rule-based one due to its language independence. Golding and Schabes [2] gave a method for real-word errors in a sentence by combining a trigram model with Bayes' theorem, using the trigram method for POS tagging and the Bayesian method for feature extraction. M. Kim, S. Choi, and H. Kwon [3] proposed a combination of a Naïve Bayes classifier and chi-square methods to solve context-sensitive errors in Korean text; they addressed typographical errors only. Islam et al. [4][5] developed trigram-based context-sensitive error correction with the help of a self-developed string similarity measure. Y. Bassil and Md. Alwani [6] proposed a method that blends three models, namely unigram, bigram, and 5-gram, to solve context-sensitive errors. Church and Gale [7] suggested the noisy channel model for the detection and correction of real-word errors while maintaining the semantic characteristics of a sentence. In Bengali language processing, no substantial work has yet been done on semantic errors.
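Returning to the claim of handling multiple semantic errors in one sentence, one plausible way to combine the two sketches above, offered purely as an illustration since the paper's own strategy is detailed later in Section V, is a left-to-right sweep that commits each correction before scoring the next word:

# Hypothetical multi-error sweep; the paper's actual strategy may differ.
def correct_sentence(words, corrector, dictionary):
    corrected = list(words)
    for i, w in enumerate(corrected):
        candidates = confusion_set(w, dictionary) | {w}
        best = corrector.best_candidate(corrected, i, candidates)
        if best != w:
            corrected[i] = best  # commit so later decisions see the fix
    return corrected

Committing each fix early means a corrected word immediately improves the context features for the words after it, which matters when several errors cluster in the same sentence.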