International Conference on Bangla Speech and Language Processing (ICBSLP), 27-28 September, 2019

Assessment of Bangla Descriptive Answer Script Digitally

Md Gulzar Hussain*, Sumaiya Kabir†, Tamim Al Mahmud‡, Ayesha Khatun§, Md Jahidul Islam¶
Department of Computer Science and Engineering
Green University of Bangladesh, Dhaka-1207, Bangladesh
gulzar.ace@gmail.com*, sumaiya@cse.green.edu.bd†, tamim@cse.green.edu.bd‡, ayesha@cse.green.edu.bd§, jahidul.jnucse@gmail.com¶

Abstract—Answer script evaluation is an essential part of the student evaluation process in the education system. In an examination, students answer both subjective and objective questions, and instructors in educational institutes must evaluate the answer scripts manually. In Bangladesh, the number of students and institutes is increasing day by day, so it is becoming hard for instructors to evaluate answer scripts thoroughly.
It is therefore necessary to find a way to evaluate answer scripts automatically. Many techniques have been proposed for the English language, but we did not find any for the Bangla language. This paper proposes a way to evaluate Bangla subjective answer scripts automatically using keyword matching and linguistic analysis. We tested the proposed model on answer scripts for 20 questions and found a minimum relative error of 1.8%. 15 teachers and 10 students volunteered to evaluate the answer scripts.

Keywords: Bangla Language, Bangla Subjective Script, Evaluation, Keyword Matching, Automatic Evaluation, Answer Script.

I. INTRODUCTION

Examination systems are designed to test the skills and knowledge of individuals. There are various kinds of examinations all over the world, such as multiple-choice and subjective tests. Objective evaluation uses questions that have exactly one correct answer, whereas subjective evaluation allows more than one correct answer. Like other countries, Bangladesh follows both kinds of evaluation. In 2015, Bangladesh had around 23,907,151 students across primary, secondary, and post-secondary education, and the primary languages of instruction at these levels are Bangla and English [1]. However, there are not enough teachers for these students: in 2015 the teacher-student ratio was 1:41 [2] [3]. Teaching students, setting questions, and evaluating students' answer scripts is difficult for teachers at this ratio, so automating some of these tasks would make instructors' work easier. Beyond the education system, many job entrance examinations are also subjective, and those answer scripts need to be evaluated as well. If answer scripts can be assessed automatically, the evaluation process in Bangladesh will become much smoother.

The rest of the paper is organized as follows: Section II discusses previous works, Section III describes the methodology, Section IV presents the result analysis and discussion, and Section V concludes the paper.
II. PREVIOUS WORKS

Answer script evaluation is generally a difficult task, and it becomes harder for the Bangla language. Various works exist for English, but none were found for Bangla text. Here we discuss some previous works on evaluating subjective answer scripts.

The authors of [4] suggest an alternative sentence generator method that produces an alternative model response by linking the technique to a synonym dictionary. In the matching stage, they proposed a combination of three algorithms, Common Words (COW), Longest Common Sub-sequence (LCS), and Semantic Distance (SD), which have been used effectively in many Natural Language Processing systems and produced good outcomes. The Hyperspace Analog to Language (HAL) procedure and the Self-Organizing Map (SOM) method are used to evaluate students' answer scripts in [5]; the Kohonen Self-Organizing Map clustering technique is applied to the vectors in their suggested system. The authors of [6] observed that a semantically enhanced NLP-based technique outperforms simple lexical matching techniques; their application performs automatic answer assessment based on keywords supplied by the moderator as input and provides an equal mark distribution.

In [7], the authors proposed a Natural Language Processing (NLP) based method for evaluating answer scripts, using a keyword-based summarizing technique to generate a summary of the answer script. The authors of [8] proposed a syntactic-relation based feature extraction technique to evaluate descriptive answer scripts; their method includes question classification, answer classification, and evaluation of students' subjective answers, grading them with a suitable score. Advanced machine learning techniques and methodologies based on a new model are proposed by Prakruthi et al. in [9] for Optical Character Recognition based work using supervised learning. An implementation that uses machine learning to evaluate answer scripts is proposed in [10]. The authors of [11] proposed a semi-automated evaluation procedure in which subjective questions are supplemented with model answer points; their framework also includes provisions for rewards and penalties.

III. METHODOLOGY

The working methodology of our proposed system is discussed in this section. The system follows the workflow shown in Fig. 1 to evaluate the Bangla answer script of an examinee.

Fig. 1. System workflow of the proposed system

In the system flow diagram, the "Keyword Generation" step is repeated for the question, the student's answer script, and the answers collected from the open and closed domains; it is a central step in our proposed system. To evaluate the student's answer, keywords generated from the question are searched in the open and closed domains. Here the open domain includes the World Wide Web, various web pages, blogs, Wikipedia, etc., while the closed domain consists of a specific category or a specific answer to the question. A comparison algorithm is proposed to compare the keywords generated from the collected answer with the keywords generated from the student's answer. These steps are elaborated below.
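As a rough illustration of how the steps fit together in the order of Fig. 1, the following Python sketch composes hypothetical helper functions. The names and signatures are illustrative only, not part of the proposed system's specification; the helpers correspond to the subsections that follow, and the wrapper around the external grammar checker is not shown.

def evaluate_answer_script(question, student_answer, full_mark, reference_answer=None):
    # 1. Keyword generation (Section III-A) applied to the question.
    question_tokens = preprocess(question)

    # 2. Collect a reference answer (Section III-B): supplied by the instructor
    #    (closed domain) or fetched from the web (open domain).
    if reference_answer is None:
        reference_answer = search_open_domain(" ".join(question_tokens))

    # 3. Keyword frequencies for the reference and the student's answer.
    ref_freq = generate_keyword_frequencies(preprocess(reference_answer))
    stu_freq = generate_keyword_frequencies(preprocess(student_answer))

    # 4. Compare the two frequency results (Algorithm 2, Section III-C).
    weights, needed, unnecessary = compare_frequencies(stu_freq, ref_freq)

    # 5. Grammar/spelling score via the external checker (Algorithm 3, Section III-D)
    #    and final grading (Algorithm 4, Section III-E).
    tgsm = grammar_spelling_score_from_checker(student_answer, weights)
    return assign_mark(full_mark, weights, needed, unnecessary, tgsm)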
A. Keyword Generation

Our proposed system takes the student's answer as a text or document file. Handwritten answer scripts are not considered, in order to reduce complexity. In this step the text is processed carefully, following the sub-steps shown in Fig. 2.

Fig. 2. Sub-steps in the Keyword Generation

1) Preprocessing: Answers collected from the various domains and from students contain many unnecessary words, stop words, punctuation symbols, emoticons, etc. These are noise in the unprocessed data. We remove special characters such as #, &, and %, emoticons such as :) and :P, and stop words; we also remove articles and punctuation to simplify the text. After preprocessing, keywords are generated and their frequencies are computed in the following sub-step.
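A minimal preprocessing sketch in Python is shown below. The stop-word list and the exact set of symbols removed are illustrative assumptions, since the paper does not fix a particular tool for this step.

import re
import string

# Illustrative, incomplete Bangla stop-word list; a real system would load a
# full list from a resource file.
BANGLA_STOP_WORDS = {"এবং", "ও", "কিন্তু", "যে", "এই", "একটি", "করে", "হয়"}

# Common ASCII emoticons of the kind mentioned in the text.
EMOTICONS = {":)", ":(", ":P", ":D", ";)"}

def preprocess(text):
    # Drop emoticons first so ":P" and similar vanish cleanly.
    for emo in EMOTICONS:
        text = text.replace(emo, " ")
    # Remove special characters and punctuation, including #, &, % and the
    # Bangla danda / double danda (U+0964, U+0965).
    text = re.sub("[" + re.escape(string.punctuation) + "\u0964\u0965]", " ", text)
    # Tokenize on whitespace and drop stop words.
    return [w for w in text.split() if w not in BANGLA_STOP_WORDS]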
2) Keyword Frequency Generation: After preprocessing the text data, we identify every keyword and count its frequency ratio using Algorithm 1.

Algorithm 1: Generate Keyword Frequencies
  Answer = answer script after preprocessing
  Frequency_Array = empty
  Number_Keywords = number of keywords in Answer
  for each word in Answer:
      if word is in Frequency_Array:
          Frequency_Array[word] = Frequency_Array[word] + 1 / Number_Keywords
      else:
          add word as an index of Frequency_Array
          Frequency_Array[word] = 1 / Number_Keywords

Applying Algorithm 1 produces Frequency_Array, which holds the frequency of every word. The array contains no stop words, punctuation, or irrelevant words, since these were removed during preprocessing.
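A direct Python rendering of Algorithm 1 might look as follows; it assumes the preprocessed answer is already available as a list of tokens.

from collections import defaultdict

def generate_keyword_frequencies(tokens):
    # Frequency ratio of each keyword: every occurrence adds 1/Number_Keywords,
    # so the values of the returned dictionary sum to 1 (Algorithm 1).
    number_keywords = len(tokens)
    if number_keywords == 0:
        return {}
    frequency = defaultdict(float)
    for word in tokens:
        frequency[word] += 1.0 / number_keywords
    return dict(frequency)

# Example: generate_keyword_frequencies(["বাংলাদেশ", "নদী", "বাংলাদেশ"])
# returns roughly {"বাংলাদেশ": 0.667, "নদী": 0.333}.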
B. Searching & Collecting Answers Using Keywords

To assess a student's answer, the system needs a standard answer to the question. This answer can come from an open or a closed domain: an instructor can provide it manually, or the system can collect it from the internet or other resources. Our system parses data from online knowledge-based websites such as Wikipedia. The MediaWiki action API is a good example of a web service that provides access to wiki features such as authentication, page operations, and search, with an entry point such as bn.wikipedia.org/w/api.php. The proposed system collects data based on the question keywords from the World Wide Web or from local resources. For example, if the question is "Bangladesh sampark likhun" (Write about Bangladesh.), the system searches for "Bangladesh" and collects answers from these domains.
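As an illustration, a reference answer could be pulled from Bangla Wikipedia through the MediaWiki action API roughly as follows. The two-step search-then-extract flow and the helper name are assumptions for this sketch, not the paper's exact crawler.

import requests

API_URL = "https://bn.wikipedia.org/w/api.php"

def search_open_domain(keyword):
    # Step 1: find the best-matching page title for the keyword.
    search = requests.get(API_URL, params={
        "action": "query", "list": "search",
        "srsearch": keyword, "format": "json",
    }).json()
    hits = search["query"]["search"]
    if not hits:
        return ""
    title = hits[0]["title"]

    # Step 2: fetch a plain-text extract of that page's introduction
    # to serve as the collected reference answer.
    page = requests.get(API_URL, params={
        "action": "query", "prop": "extracts",
        "exintro": 1, "explaintext": 1,
        "titles": title, "format": "json",
    }).json()
    pages = page["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")

# Example: reference_text = search_open_domain("বাংলাদেশ")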
C. Comparison Algorithm to Compare Two Results

Once keywords and their frequency ratios have been calculated for both the student's answer script and the searched answer, the two results must be compared to score the student's answer. Algorithm 2 is proposed for this purpose.

Algorithm 2: Comparison Algorithm
  KeywordScore = 0
  SAFrequency = frequency result of the student's answer
  SRFrequency = frequency result of the searched answer
  LengthSR = size of SRFrequency
  WeightSA = empty array
  sort SRFrequency in descending order
  for each word in SAFrequency:
      if word is in SRFrequency:
          add word as an index of WeightSA
          if SAFrequency[word] > SRFrequency[word]:
              WeightSA[word] = 1
          else:
              WeightSA[word] = SAFrequency[word] / SRFrequency[word]
          SAFrequency[word] = 0
          SRFrequency[word] = 0
  NeededWord = 0
  for each word in SRFrequency:
      if SRFrequency[word] != 0:
          WeightSA[word] = -1 * SRFrequency[word]
          SRFrequency[word] = 0
          NeededWord = NeededWord + 1
  UnnecessaryWord = 0
  for each word in SAFrequency:
      if SAFrequency[word] != 0:
          WeightSA[word] = -1 * SAFrequency[word]
          SAFrequency[word] = 0
          UnnecessaryWord = UnnecessaryWord + 1
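A Python sketch of Algorithm 2 is given below; representing the frequency results as dictionaries (sparse vectors) is an implementation assumption.

def compare_frequencies(sa_frequency, sr_frequency):
    # Algorithm 2 (sketch): weight student-answer keywords against the reference.
    # Returns (weight_sa, needed_word, unnecessary_word); positive weights mark
    # matched keywords, negative weights penalise missing or extra ones.
    sa = dict(sa_frequency)   # student's answer frequencies
    sr = dict(sr_frequency)   # searched (reference) answer frequencies
    weight_sa = {}

    # Matched keywords: the weight is capped at 1 when the student uses a keyword
    # at least as often (relatively) as the reference answer does.
    for word in list(sa):
        if word in sr:
            weight_sa[word] = 1.0 if sa[word] > sr[word] else sa[word] / sr[word]
            sa[word] = 0.0
            sr[word] = 0.0

    # Keywords present in the reference but missing from the student's answer.
    needed_word = 0
    for word, freq in sr.items():
        if freq != 0:
            weight_sa[word] = -freq
            needed_word += 1

    # Keywords in the student's answer that the reference does not contain.
    unnecessary_word = 0
    for word, freq in sa.items():
        if freq != 0:
            weight_sa[word] = -freq
            unnecessary_word += 1

    return weight_sa, needed_word, unnecessary_word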
Based on the scores, we have calculatedthe Absolute and Relative Errors, and found table I.We calculated the Absolute error using the formula 1 andrelative error using formula 2.AbsoluteError = ActualV alue−MeasuredV alue ...(1)RelativeError = AbsoluteErrorActualV alue ∗ 100 ............................(2)Where in formula 1 actual value is the score given by theteachers and measured value is the score given by the system.We also find the relative error of answer script of each or thequestions are given</s> |
TABLE I
ABSOLUTE AND RELATIVE ERRORS FOR THE PROPOSED SYSTEM
Number of Teachers | Average Score by Teachers | Score by System | Absolute Error | Relative Error
5                  | 8.56                      | 8.1             | 0.46           | 5.37%
10                 | 8.25                      | 8.4             | 0.15           | 1.81%
15                 | 8.4                       | 8.7             | 0.3            | 3.57%

IV. RESULT ANALYSIS

To experiment with our system, we gave 20 questions to 10 students; every student answered two questions, and each answer contains 300-350 Bangla words. Each answer script was evaluated by the volunteer teachers, who assigned a score to each answer, and each question carried 10 marks. We also evaluated the answer scripts using our proposed system. Based on the scores, we calculated the absolute and relative errors shown in Table I, using formula (1) for the absolute error and formula (2) for the relative error:

Absolute Error = |Actual Value - Measured Value|          (1)
Relative Error = (Absolute Error / Actual Value) * 100    (2)

where the actual value is the score given by the teachers and the measured value is the score given by the system. The relative error for the answer script of each of the 20 questions is shown in Fig. 3, a graph of question number versus relative error.

Fig. 3. Questions vs. Relative Error (per-question relative errors of the proposed system; plot data omitted)

As resources for the Bangla language are hard to find and no previous work exists, our work is only an initial step, but the relative error observed in our system is acceptable for a beginning, and we hope to improve the work in the future. A complete implementation of the idea is needed to make it automated and faster; new parameters can be added to make the system more reliable, and synonyms of words can be checked. Machine learning algorithms can be used to make the system more efficient and effective. Our proposed method should also be extended to various types of sentences, and input should be taken from handwritten answer scripts to make it more applicable in the real world.

V. CONCLUSION

We have proposed a basic method to evaluate Bangla subjective answer scripts for the first time. The relative error of our proposed methodology is under 10%, which is quite satisfactory as a starting point, and there are still many ways to improve the experimental methodology. Working with the Bangla language is more difficult than with English because of its limited resources; we hope our proposed system will save time and make good use of the currently available resources.

REFERENCES
[1] Wikipedia. (2015) Education in Bangladesh. [Online]. Available: https://en.wikipedia.org/wiki/Education_in_Bangladesh
[2] BANBEIS. (2017) Bangladesh education statistics 2016. [Online]. Available: http://lib.banbeis.gov.bd/
[3] AsiaNewsNetwork. (2017) Teacher-student ratio worsens in Bangladesh. [Online]. Available: http://annx.asianews.network/content/teacher-student-ratio-worsens-bangladesh-46376
[4] A. Benomran and M. Ab Aziz, "Automatic essay grading system for short answers in English language," Journal of Computer Science, vol. 9, pp. 1369-1382, Sep. 2013.
[5] K. Meena and L. Raj, "Evaluation of the descriptive type answers using hyperspace analog to language and self-organizing map," in 2014 IEEE International Conference on Computational Intelligence and Computing Research, Dec. 2014, pp. 1-5.
[6] K. Wangchuk, "Automatic answer evaluation: NLP approach," May 2016.
[7] M. Rahman and F. Hasan Siddiqui, "NLP-based automatic answer script evaluation," vol. 4, pp. 35-42, Dec. 2018.
[8] V. Nandini and P. Uma Maheswari, "Automatic assessment of descriptive answers in online examination system using semantic relational features," The Journal of Supercomputing, May 2018.
[9] M. Prakruthi S T, "Automated students answer scripts evaluation system using advanced machine learning techniques," International Journal for Research in Applied Science and Engineering Technology, vol. 6, pp. 1794-1797, Jun. 2018.
[10] P. Sinha and A. Kaul, "Answer evaluation using machine learning," Mar. 2018.
[11] C. Roy and C. Chaudhuri, "Case based modeling of answer points to expedite semi-automated evaluation of subjective papers," in 2018 IEEE 8th International Advance Computing Conference (IACC), Dec. 2018, pp. 85-90.
[12] Akkhor. (2019) Akkhor Bangla spell and grammar checker. [Online]. Available: http://www.akkhorbangla.com/
Question Bank Similarity Searching System (QB3S) Using NLP and Information Retrieval Technique
(Conference paper, May 2019. DOI: 10.1109/ICASERT.2019.8934449)

Md. Raihan Mia
Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh. Email: 1305116.mrm@ugrad.cse.buet.ac.bd

Abu Sayed Md. Latiful Hoque
Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh. Email: asmlatifulhoque@cse.buet.ac.bd

Abstract—Problem-Based e-Learning (PBeL) in the Bangla language is one of the fastest growing areas in the use of ICT in education. A Question Bank (QB) is the main component of any PBeL system, and similarity searching over the complex structure of a QB is a challenging task in PBeL development. We have developed an efficient Question Bank Similarity Searching System (QB3S) that finds similar questions, handles duplicate questions, and ranks the search results for a query input using NLP and Information Retrieval (IR) techniques. QB3S has four modules: Bangla document processing, question structure analysis with clustered indexing by a B+ tree, WordNet construction, and information retrieval. Lexical analysis, stemming by finite-automata rules, and stopword removal are used for Bangla document processing. The most challenging parts of QB3S were analyzing the structure of the data for clustered indexing of the sorted sequential file of the QB database (DB) with a B+ tree, and improving the TF-IDF algorithm with a weighting function. A WordNet is used to handle synonyms. A Vector Space Model (VSM) is built from the weighted TF-IDF matrix, and the cosine similarity between the query input and every MCQ in the DB is computed from the VSM. QB3S has been evaluated on an experimental dataset under different test cases, and the searching accuracy was found to be satisfactory.

Index Terms—Tokenization, Stopword Removal, Stemming, Clustered Indexing, B+ Tree, WordNet, TF-IDF, VSM, Cosine Similarity

I. INTRODUCTION

Problem-Based eLearning (PBeL) systems have proved to be effective both in blended classroom learning and in asynchronous learning. The PBeL system [1] developed for the ICT course at the Higher Secondary (HSC) level contains a rich question bank of different types and complexities. It is an essential requirement that the QB contain only distinct questions in terms of title, contents, and answers. This requires a rigorous similarity search to remove possible duplicate questions. In this research, we have developed a similarity searching system for the complex structure of a QB using NLP and IR techniques.
Besides duplicate handling, the system can also rank the search results of a query input based on similarity values.

Information Retrieval (IR) [2] can be defined as a set of techniques and tools for accessing information in a database of unstructured documents so that the user can retrieve the correct information. IR algorithms that fetch relevant information from the large amount of structured data in the QB database, where the data
are represented in tabular form, need strategically better searching techniques.

We have used NLP tools [3] such as tokenization (lexical analysis) of a Bangla text corpus [4], stopword removal, and rule-based stemming to normalize inflected words, and we have analyzed the structured data to build a dynamic B+ tree clustered index that speeds up record retrieval. Many papers on building a generative lexicon [5] for the Bangla language have already been published in computational linguistic morphology. The stem need not be identical to the morphological root of the word; it is usually sufficient that related words map to the same stem, even if that stem is not itself a valid root. Stemming algorithms have been studied in computer science since the 1960s. In [6], a rule-based approach to stemming Bangla words was found to be an excellent solution. The addition of inflectional suffixes, derivational suffixes, and agglutination in compound words makes morphological parsing fairly complex for Bangla. There are existing efforts to build a complete morphological parser for Bangla [7], where experiments have been carried out with two types of algorithms: a simple suffix-stripping algorithm and a score-based stemming-cluster identification algorithm. Another rule-based intelligent Bangla parser eases the handling of semantic issues in the subsequent stages of machine translation [8].

The B+ tree data structure for indexing object-oriented databases provides time-efficient searching, insertion, and deletion algorithms [9]. The Clustered Metric Tree (CM-Tree) was built as a dynamic clustered index in an unstructured metric database for similarity search [10]. Variations of the TF-IDF (Term Frequency-Inverse Document Frequency) weighting scheme are often used by IR techniques as a central tool for scoring and ranking a document's relevance to a user query [11]. Many TF-IDF weighting functions have been developed for various IR applications; for example, an improved TF-IDF weighting function is proposed that uses the distribution information among classes and inside a class [12]. The main IR techniques used in QB3S are an improved weighting scheme for TF-IDF based on question title, option, or answer terms; the Vector Space Model [13] for representing documents as vectors in t dimensions (if the documents contain t terms); and cosine similarity between the document vectors and the query vector. A hash-mapped WordNet is used to handle words with the same meaning (synonyms). The main objectives of our research are:

• To improve Bangla document processing tools such as tokenization, stopword removal, and stemming for processing the QB dataset.
• To analyze the structure of the data and index the sorted sequential records of the QB database (clustered indexing) with a B+ tree data structure for faster access.
• To construct a hash-mapped WordNet to handle synonyms.
• To improve the TF-IDF algorithm with a weighting function.

The rest of the paper is organized as follows. Section II describes the structure of the QB [14]-[17]. The system architecture of QB3S is given in Section III. Section IV shows the results obtained after applying QB3S to the existing eLearning system [1]; a detailed discussion of the results is also given in that section. Finally, the conclusion is given in Section V.
II. QUESTION BANK STRUCTURE AND E-LEARNING OF THE ICT COURSE

A sequence of papers [14]-[17] has been published in journals and conferences on improving PBeL for the ICT
course at the secondary and higher secondary level in Bangladesh. A problem-based e-learning system (http://epbl.org) for interactive learning of the ICT course, practice and model examinations from the question bank, and learning of C programming, HTML, SQL, and ERD based on the higher secondary textbook has already been implemented successfully. Question bank data are an important resource for Problem-Based e-Learning (PBeL), which requires a search engine to retrieve query information from the large-scale data: the admin of the PBeL system can use it to find duplicate questions, and users can retrieve the results relevant to their query input.

Let us take a closer look at the information and utilities available in a Bangla question bank database. The multiple choice questions can be categorized into three classes:

1) Cognitive class: the simplest structure, consisting of a question title and four options.
2) Analytical class: contains three options, and the scope of the answer is the four combinations of these options.
3) Higher Ability class: one of the most complex structures; scenario-based questions, where two or three Cognitive or Analytical MCQs are based on a single scenario.

The organization of stored questions in the database schema is shown in Fig. 1. We need to analyze the structure of each question and cluster it using the decision boundaries of the three classes, as described in Section III.

Fig. 1. Organization of questions in the database

III. QB3S SYSTEM ARCHITECTURE

This section describes the body of methods and principles behind the QB3S procedure. QB3S takes an MCQ, or part of an MCQ, as input, processes the data, and searches for the most similar MCQs in the entire QB database. Fig. 2 shows the architecture of the system in terms of process flow. The whole process is split into four modules:

1) Question structure analysis and clustering module
2) Document processing module
3) WordNet module
4) IR module

Fig. 2. System architecture of QB3S

A. Question Structure Analysis and Clustering

In Section II we discussed the structure of the question bank data and categorized it into three classes. The classification criteria based on question structure are shown in Fig. 3.

Fig. 3. Question structure analysis

The database schema of the PBeL system for the ICT course has been designed so that a table is allocated to each class; each table contains the questions of that class for all chapters.

Clustered indexing by B+ tree: A B+ tree data structure is used to track the clustered index (primary key in sorted order) of each database table, which efficiently reduces the access time over the search space of the QB database. The ICT course QB has 6 chapters and 3 classes of questions, and the primary key of each table is maintained accordingly. We therefore track 18 indexes in the leaf nodes, each one pointing to the starting key of the sorted data; the degree of the B+ tree is 6 in this case (see Fig. 4), so the height of the tree is ceil(log(18)/log(6/2)) = 2.

During a search operation, h nodes are read from disk into main memory, where h is the height of the B+ tree, h = log_t(n), with n the number of keys stored in the tree and t the size of a block (node). In addition to the disk reads, the B+ tree search algorithm performs a linear search in every node read from disk; the time complexity of each linear search is O(t). Thus, the total time complexity of a B+ tree search operation is O(t log_t n), and insertion and deletion have the same complexity [18]. A larger number of chapters requires a higher order, and when other courses are added to the QB an extra level is added dynamically by imposing a precondition [19].

Fig. 4. B+ tree structure for clustered indexing
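As an illustration only (the paper does not publish code), the following minimal Python sketch mimics the clustered-index idea above: 18 cluster keys (6 chapters × 3 question classes, using the chapter IDs of Table III) kept in sorted order, with a binary search standing in for the B+ tree leaf lookup whose cost the text bounds by O(t·log_t n). The key format is an assumption for illustration, not the authors' schema.

```python
from bisect import bisect_left

# Chapter IDs as listed in Table III, and the three question classes.
CHAPTERS = [f"c1100100{i}" for i in range(1, 7)]
CLASSES = ["cognitive", "analytical", "higher_ability"]

# 18 cluster keys, one per (chapter, class); in the real system each leaf entry
# points to the first record of its sorted run in the question-bank table.
CLUSTER_KEYS = sorted(f"{ch}:{cl}" for ch in CHAPTERS for cl in CLASSES)

def locate_cluster(chapter: str, question_class: str) -> int:
    """Stand-in for the B+ tree leaf lookup: binary search over sorted keys.
    A real B+ tree reads h = log_t(n) nodes from disk and scans each in O(t),
    giving the O(t * log_t n) search cost quoted in the text."""
    key = f"{chapter}:{question_class}"
    pos = bisect_left(CLUSTER_KEYS, key)
    if pos == len(CLUSTER_KEYS) or CLUSTER_KEYS[pos] != key:
        raise KeyError(key)
    return pos  # index of the cluster's starting key

if __name__ == "__main__":
    print(len(CLUSTER_KEYS))                          # 18 leaf keys
    print(locate_cluster("c11001002", "analytical"))  # slot of one cluster
```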
B. Bangla Document Processing

The existing work on tokenization, stemming, and stopword removal for Bangla language processing was discussed in Section I. The language processing tools used in QB3S are as follows.

1) Tokenizer: Tokenization is the process of breaking the given text into units called tokens; it corresponds to lexical analysis in natural language processing. We use a standard tokenizer, which splits text into terms on word boundaries delimited by whitespace, as defined by the Unicode Text Segmentation algorithm [20], and removes most punctuation symbols. An array-list data structure is used to store the terms after tokenization.

2) Stemmer: A simple rule-based stemmer [6] for Bangla word parsing is used for semantic analysis. It takes an inflected word as input and outputs the corresponding root stem. The common rule forms are listed below, and examples from the rules text file are shown in Fig. 5:

X — when X appears at the end of a word, remove it.
Y → Z — when Y appears at the end of a word, replace it with Z.
Y.Z → A.B — when Y, followed by some character a, followed by Z appears at the end of a word, replace it with AaB.

Fig. 5. Examples of common rules for stemming

3) Stopword Remover: "Stop words" are the most common words in a language, and there is no single universal list of Bangla stop words. We build the stopword list in two ways: 1) from some predefined words, and 2) from the most frequently occurring words in the question bank database. All stop words are removed from the array-list using a UTF-8 string matching library in Java [21].

C. WordNet Module

Synonyms are words that have similar meanings. A synonym set, or synset, is a group of synonyms, so a synset corresponds to an abstract concept. We construct a runtime hash map from a text file that contains a Bangla synonym dictionary (see Fig. 6).

Fig. 6. Bangla synonym dictionary text file

The steps of the hash-mapped WordNet construction are:
1) Take input from the Bangla dictionary text file.
2) Tokenization.
3) Stopword removal.
4) Stemming.
5) Use the hash function f(n) = charAt(0) mod 128 to find the hash value (there are 128 Bangla characters, Unicode range 0980H-09FFH).

The hash map of the first line of Fig. 6 is shown in Fig. 7. The searching time complexity is O(1).

Fig. 7. Representation of the dictionary hash map
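The processing pipeline just described can be sketched briefly; this is a hypothetical Python illustration, not the authors' Java implementation. The stopword list, suffix rules, and synonym dictionary are English placeholders for the Bangla resources, and dict hashing stands in for the f(n) = charAt(0) mod 128 hash function.

```python
# Hypothetical sketch of the QB3S document-processing pipeline.
# Rules, stop words, and the synonym dictionary below are placeholders.
import re

STOP_WORDS = {"the", "is", "a", "of"}                    # assumed stopword list
SUFFIX_RULES = [("ies", "y"), ("ing", ""), ("s", "")]    # assumed "Y -> Z" style rules
SYNONYMS = {"large": "big", "huge": "big"}               # assumed synonym dictionary

def tokenize(text: str) -> list[str]:
    """Split on word boundaries and drop punctuation."""
    return re.findall(r"\w+", text.lower())

def remove_stopwords(tokens):
    return [t for t in tokens if t not in STOP_WORDS]

def stem(token: str) -> str:
    """Apply the first matching end-of-word rule, as in the rules file of Fig. 5."""
    for suffix, replacement in SUFFIX_RULES:
        if token.endswith(suffix) and len(token) > len(suffix):
            return token[: -len(suffix)] + replacement
    return token

def synonym_map(token: str) -> str:
    """Hash-map style synonym lookup; dict hashing plays the role of
    f(n) = charAt(0) mod 128 in the paper, with O(1) expected lookup."""
    return SYNONYMS.get(token, token)

def preprocess(text: str) -> list[str]:
    return [synonym_map(stem(t)) for t in remove_stopwords(tokenize(text))]

if __name__ == "__main__":
    print(preprocess("Searching the largest halls"))  # -> ['search', 'largest', 'hall']
```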
D. Information Retrieval Module

1) Improved Weighted TF-IDF: Term Frequency (TF) measures how frequently a term occurs in a document. Suppose we have a query or input MCQ consisting of terms t_1, t_2, ..., t_n, and there are documents mcq_1, mcq_2, ..., mcq_m over which TF-IDF is computed. Then

TF(t, mcq_j) = f(t, mcq_j) / Σ_{T ∈ mcq_j} f(T, mcq_j)    (1)

where f(t, mcq_j) is the number of times term t appears in mcq_j.

Inverse Document Frequency (IDF) measures how important a term is: the natural logarithm of the ratio between the total number of documents and the number of documents containing the term t,

IDF(t) = ln(N / n_t)    (2)

where N is the total number of documents and n_t is the number of documents containing term t.

We have developed an improved version of this algorithm in which a predefined weighting factor, based on the classification of the question structure, is applied to each term. A weight W(t) is assigned according to the part of the question in which the term occurs (title, options, or answer), with w_1 > w_2 > w_3. The TF-IDF equation with the predefined weighting factor becomes

TF-IDF_improved(t, mcq_j) = [ f(t, mcq_j) · W(t) / Σ_{T ∈ mcq_j} f(T, mcq_j) · W(T) ] · ln(N / n_t)    (3)

2) Vector Space Model Representation from TF-IDF: The vector space model, or term vector model [22], is an algebraic model that represents text documents as vectors of identifiers such as index terms. Formally, a vector space is defined by a set of linearly independent basis vectors, which correspond to the dimensions or directions of the space; they are linearly independent because knowing a vector's value in one dimension says nothing about its value in another. Documents and queries are represented as vectors,

d_j = (w_1,j, w_2,j, ..., w_t,j),  q = (w_1,q, w_2,q, ..., w_n,q),

where each dimension corresponds to a separate term; if a term occurs in the document, its value in the vector is non-zero. A graphical representation of the vector space is given in Fig. 8.

Fig. 8. Vector space model

The VSM is a typical method for describing text features in text classification. It adopts TF-IDF weights to compute the term weighting in each dimension of the text feature; however, it only considers the relationship between a term and the whole text and neglects the relationship between different terms. The VSM is used here to represent the TF-IDF weights of an MCQ as a vector in this space.

Assume we have a query MCQ Q. After processing it using the methods described in Section III-B, Q is transformed into a set of terms and can be represented in a 7-dimensional vector space. The TF-IDF values are calculated in a weighting matrix in which each row represents the vector of a document in the space of the query document. After applying the improved TF-IDF algorithm, the weighting matrix is shown in Table I.

TABLE I. VECTOR REPRESENTATION FROM TF-IDF VALUES IN THE VSM
Doc | term1 | term2 | term3 | term4 | term5 | term6 | term7
mcq1 | 0.0 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05
mcq2 | 0.05 | 0.08 | 0.0 | 0.01 | 0.0 | 0.03 | 0.0
mcq3 | 0.0 | 0.12 | 0.22 | 0.33 | 0.44 | 0.55 | 0.83
mcq4 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
mcq5 | 0.0 | 0.0 | 0.03 | 0.03 | 0.03 | 0.03 | 0.03
mcq6 | 0.09 | 0.0 | 0.11 | 0.1 | 0.1 | 0.1 | 0.1

3) Similarity Measurement Using Cosine Similarity: Cosine similarity is a measure of similarity between two non-zero vectors of an inner product space: the cosine of the angle between them. This metric measures orientation rather than magnitude, so it can be seen as a comparison between documents in a normalized space: we consider not only the magnitude of each word weight (TF-IDF) of each document but the angle between the documents. To assign a numeric score to a document for a query, the model measures the similarity between the query vector and the document vector. The angle between two vectors is typically used as a measure of divergence between them, and the cosine of the angle is used as the numeric similarity [23]. Cosine has the nice property that it is 1.0 for identical vectors and 0.0 for orthogonal vectors. If D⃗ is the document vector and Q⃗ the query vector, the similarity of document D to query Q (the score of D for Q) is

sim(D⃗, Q⃗) = cos θ = (D⃗ · Q⃗) / (‖D⃗‖ ‖Q⃗‖) = Σ_{i=1}^{n} D_i Q_i / ( sqrt(Σ_{i=1}^{n} D_i²) · sqrt(Σ_{i=1}^{n} Q_i²) )    (4)

Fig. 9. Cosine distance between D⃗ and Q⃗, where 0 ≤ cos θ ≤ 1

Cosine similarity thus yields a metric that says how related two documents are by looking at the angle instead of the magnitude. Using Eq. (4), we find the similarity between the query vector and the other MCQs in the VSM of Table I. The cos θ values for the different MCQs lie between 1 (most similar) and 0 (least similar), and the ranking computed from the cosine values is given in Table II.

TABLE II. RANKING FROM COSINE VALUES
Doc | Cosine value
mcq1 | 0.93
mcq3 | 0.81
mcq2 | 0.64
mcq5 | 0.32
mcq6 | 0.24
mcq4 | 0.00
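To make the scoring path concrete, here is a small, hypothetical Python sketch of Eqs. (1)-(4): weighted term frequencies with assumed part weights w1 > w2 > w3 (the paper does not publish its weight values), IDF, document vectors in a shared term space, and cosine ranking. The toy MCQ tokens are placeholders, not data from the question bank.

```python
import math
from collections import Counter

# Assumed part weights (w1 > w2 > w3); the paper does not state the actual values.
PART_WEIGHTS = {"title": 3.0, "option": 2.0, "answer": 1.0}

def weighted_tf(doc_parts: dict[str, list[str]]) -> dict[str, float]:
    """Weighted TF of Eq. (3): f(t)*W(t) normalised by the sum of f(T)*W(T)."""
    raw = Counter()
    for part, tokens in doc_parts.items():
        for tok in tokens:
            raw[tok] += PART_WEIGHTS[part]
    total = sum(raw.values())
    return {t: v / total for t, v in raw.items()}

def idf(docs_tf: list[dict[str, float]]) -> dict[str, float]:
    """IDF(t) = ln(N / n_t) over all documents, as in Eq. (2)."""
    n_docs = len(docs_tf)
    df = Counter(t for tf in docs_tf for t in tf)
    return {t: math.log(n_docs / df[t]) for t in df}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Eq. (4): dot product over the product of vector norms."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

if __name__ == "__main__":
    # Toy MCQs already tokenised into title/option/answer parts (placeholders).
    mcqs = [
        {"title": ["router", "network"], "option": ["switch", "hub"], "answer": ["router"]},
        {"title": ["html", "tag"], "option": ["body", "head"], "answer": ["body"]},
    ]
    query = {"title": ["network", "router"], "option": [], "answer": []}

    tfs = [weighted_tf(m) for m in mcqs]
    idf_w = idf(tfs)
    vecs = [{t: tf[t] * idf_w[t] for t in tf} for tf in tfs]
    q_tf = weighted_tf(query)
    q_vec = {t: q_tf[t] * idf_w.get(t, 0.0) for t in q_tf}

    ranking = sorted(enumerate(cosine(q_vec, v) for v in vecs),
                     key=lambda kv: kv[1], reverse=True)
    print(ranking)  # first MCQ scores highest, the unrelated one scores 0.0
```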
IV. RESULTS AND DISCUSSION

QB3S was first evaluated on a multiple choice question bank (ICT) database containing a total of 1289 MCQs. Information about the experimental dataset is given in Table III.

TABLE III. EXPERIMENTAL DATASET
Chapter ID | Chapter Name | Total Questions
c11001001 | Information and Communication Technology | 304
c11001002 | Communication Systems and Networking | 245
c11001003 | Number System and Digital Device | 273
c11001004 | Web Design Concepts and HTML | 166
c11001005 | Programming Language | 152
c11001006 | Database Management System | 149

We applied two test cases, with the condition present or absent, and analyzed the results graphically. The accuracy, sensitivity, and specificity of the searching performance were calculated from the counts of True Positives (the condition is detected when it is present), False Negatives (the condition is not detected when it is present), True Negatives (the condition is not detected when it is absent), and False Positives (the condition is detected when it is absent) [24].

Test 1: Select 50 existing full or partial MCQs as query inputs from the "Communication Systems and Networking" chapter and find the MCQ with the maximum cosine value. The graphical result is shown in Fig. 10. Each test result is a True Positive (TP) or a False Negative (FN) depending on the cosine similarity: if the cosine value is greater than 0.5, the condition that the query MCQ belongs to this chapter is detected (TP); otherwise it is an FN.

Fig. 10. Result of Test 1, where 42 True Positive (TP) and 8 False Negative (FN) values were detected

Test 2: Select 50 full or partial MCQs as query inputs from chapters other than "Communication Systems and Networking" (i.e., not existing in this chapter) and find the MCQ with the maximum cosine value. The graphical result is shown in Fig. 11. Each test result is a True Negative (TN) or a False Positive (FP) depending on the cosine value: if the cosine value is less than 0.5, the condition that the query MCQ does not belong to this chapter is detected (TN); otherwise it is an FP.

Fig. 11. Result of Test 2, where 7 False Positive (FP) and 43 True Negative (TN) values were detected

In this experiment with QB3S we found TP = 42, FN = 8, TN = 43, FP = 7, so

Accuracy = (TP + TN) / (TP + FN + TN + FP) = (42 + 43) / (42 + 8 + 43 + 7) = 85%
Sensitivity = TP / (TP + FN) = 42 / (42 + 8) = 84%
Specificity = TN / (TN + FP) = 43 / (43 + 7) = 86%
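The reported figures can be checked directly from the confusion-matrix counts; the short Python sketch below recomputes the 85%, 84%, and 86% values from TP = 42, FN = 8, TN = 43, FP = 7.

```python
def confusion_metrics(tp: int, fn: int, tn: int, fp: int) -> dict[str, float]:
    """Accuracy, sensitivity, and specificity as defined in [24]."""
    return {
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

if __name__ == "__main__":
    # Counts reported for Test 1 and Test 2.
    print(confusion_metrics(tp=42, fn=8, tn=43, fp=7))
    # {'accuracy': 0.85, 'sensitivity': 0.84, 'specificity': 0.86}
```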
V. CONCLUSION

Information retrieval techniques are effective at finding similar documents in a text database. These techniques are not efficient at finding similar questions in a Question Bank developed for PBeL systems, for two main reasons: i) the document structure differs in nature from conventional text documents, and ii) the contents of the different parts of the documents carry different weights.

In this paper, we have analyzed the structure and weights of the different types of questions in the QB and developed an improved TF-IDF algorithm based on those weights. At the initial stage we applied NLP tools and techniques such as tokenization, stemming, and stopword removal to the Bangla text. Handling synonyms is critical in any similarity searching system; we have developed a hash-based WordNet to account for synonyms in TF-IDF. We have created the Vector Space Model using the improved TF-IDF weighted matrix, and for faster and more efficient access to the QB database we have used a B+ tree index structure.

Using the above techniques, we have developed a Question Bank Similarity Searching System (QB3S) and applied it to a real-life dataset emerging from the PBeL system for the ICT course at HSC level in Bangla. We have achieved an accuracy level of 85% for similarity search in the QB.

REFERENCES
[1] G. M. M. Bashir, A. Latiful Haque, and B. Chandra Dev Nath, "E-learning of PHP based on the solutions of real-life problems," vol. 3, Dec. 2015.
[2] R. Baeza-Yates and B. Ribeiro-Neto, Modern Information Retrieval, vol. 463. New York: ACM Press, 1999.
[3] G. G. Chowdhury, "Natural language processing," Annual Review of Information Science and Technology, vol. 37, no. 1, pp. 51-89, 2003.
[4] F. Alam, S. Habib, and M. Khan, "Text normalization system for Bangla," tech. rep., BRAC University, 2008.
[5] J. Pustejovsky, "The generative lexicon," Computational Linguistics, vol. 17, no. 4, pp. 409-441, 1991.
[6] S. Das and P. Mitra, "A rule-based approach of stemming for inflectional and derivational words in Bengali," in Students' Technology Symposium (TechSym), 2011 IEEE, pp. 134-136, IEEE, 2011.
[7] A. Das and S. Bandyopadhyay, "Morphological stemming cluster identification for Bangla," Knowledge Sharing Event-1: Task, vol. 3, 2010.
[8] G. K. Saha, "Parsing Bengali text: an intelligent approach," Ubiquity, vol. 2006, no. April, p. 1, 2006.
[9] S. Ramaswamy and P. C. Kanellakis, "OODB indexing by class-division," ACM SIGMOD Record, vol. 24, no. 2, ACM, 1995.
[10] L. Aronovich and I. Spiegler, "CM-tree: a dynamic clustered index for similarity search in metric databases," Data & Knowledge Engineering, vol. 63, no. 3, pp. 919-946, 2007.
[11] N. Wang, P. Wang, and B. Zhang, "An improved TF-IDF weights function based on information theory," in Computer and Communication Technologies in Agriculture Engineering (CCTAE), 2010 International Conference On, vol. 3, pp. 439-441, IEEE, 2010.
[12] J. Ramos, "Using TF-IDF to determine word relevance in document queries," in Proceedings of the First Instructional Conference on Machine Learning, vol. 242, 2003.
[13] P. D. Turney and P. Pantel, "From frequency to meaning: vector space models of semantics," Journal of Artificial Intelligence Research, vol. 37, pp. 141-188, 2010.
[14] A. Habib and A. L. Hoque, "Towards mobile based e-learning in Bangladesh: a framework," in Computer and Information Technology (ICCIT), 2010 13th International Conference on, pp. 300-305, IEEE, 2010.
[15] A. Hoque, M. M. Islam, M. I. Hossain, and M. F. Ahmed, "Problem-based e-learning and evaluation system for database design and programming in SQL," International Journal of E-Education, E-Business, E-Management and E-Learning (IC4E), pp. 537-542, 2013.
[16] A. S. M. L. Hoque, G. M. M. Bashir, and M. R. Uddin, "Equivalence of problems in problem based e-learning of database," in Technology for Education (T4E), 2014 IEEE Sixth International Conference on, pp. 106-109, IEEE, 2014.
[17] G. M. M. Bashir and A. S. M. L. Hoque, "An effective learning and teaching model for programming languages," Journal of Computers in Education, vol. 3, no. 4, pp. 413-437, 2016.
[18] K. Pollari-Malmi and E. Soisalon-Soininen, "Concurrency control and I/O-optimality in bulk insertion," in International Symposium on String Processing and Information Retrieval, Springer, Berlin, Heidelberg, 2004.
[19] K. L. Bruso and J. M. Plasek, "Dynamic preconditioning of a B+ tree," U.S. Patent No. 7,809,759, 5 Oct. 2010.
[20] M. Davis and L. Iancu, "Unicode text segmentation," Unicode Standard Annex 29, 2012.
[21] M. Duerst, "The properties and promises of UTF-8," in Proc. 11th International Unicode Conference, San Jose, 1997.
[22] D. L. Lee, H. Chuang, and K. Seamons, "Document ranking and the vector-space model," IEEE Software, vol. 14, no. 2, pp. 67-75, 1997.
[23] A. Singhal et al., "Modern information retrieval: a brief overview," IEEE Data Eng. Bull., vol. 24, no. 4, pp. 35-43, 2001.
[24] D. M. Powers, "Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation," 2011.
Improving Answer Extraction For Bangali Q/A System Using Anaphora-Cataphora Resolution
(Conference paper, International Conference on Innovation in Engineering and Technology (ICIET), 27-29 December 2018. DOI: 10.1109/CIET.2018.8660888)

Shomi Khan
Department of Electrical & Electronics Engineering, Shahjalal University of Science & Technology. Email: nkskl6@gmail.com

Khadiza Tul Kubra
Department of Mathematics, Shahjalal University of Science & Technology. Email: ktk.sust2015@gmail.com

Md Mahadi Hasan Nahid
Department of Computer Science & Engineering, Shahjalal University of Science & Technology. Email: nahid-cse@sust.edu

Abstract—Human Computer Interaction (HCI) is the study of the interaction between humans (users) and computers in the design of computer technology. A Question Answering (QA) system is one part of HCI and a process of Information Retrieval (IR) in Natural Language Processing (NLP). In this research we attempt a Bangla Question Answering system for simple sentences and experiment with the system for both Bangla and English, performing both semantic and syntactic analysis. Furthermore, for Bangla, a word net is constructed to support the system. In our proposed method users can easily obtain the most probable exact answer to their question; it reduces the problem of returning a pronoun instead of the corresponding noun in the answer to a given Bangla query, and it extracts better answers than the naive approach.

Keywords—Question Answering (QA), Bangla Question Answering (QA), Information Retrieval (IR), Natural Language Processing (NLP), Semantic and Syntactic Analysis, Word Net, Human Computer Interaction (HCI), Anaphora, Cataphora.
I. INTRODUCTION

QA systems have nowadays become very much in demand, smarter, and more challenging. Users usually require a quick response with exact answers to their queries [1], but accessing the exact answer to a query from a web document is not an easy task [2], and if the query is in Bangla it becomes even more challenging, since there is little existing work on Bangla. A QA system provides a short answer to a natural language query, using either a pre-structured database or a collection of natural language web documents, and presents only the requested information [1]. Most QA systems use Natural Language Processing (NLP) together with Information Retrieval (IR) to search the required questions [2]. NLP provides the computer with the ability to understand and manipulate human language; it stands for the interaction between humans and computers. In this research, the system is demonstrated on a web document hierarchy and
a word net for exact answer prediction by semantic matching with anaphora-cataphora resolution. A word net is a lexical database that contains words with their synonyms and related words, grouping nouns, adjectives, and verbs [2]. Few works have been done on Bangla QA systems, and in Bangla there is additional complexity in extracting the answer from a document: sometimes a pronoun is returned as the answer to the question, which lowers the accuracy of the answer. In this research we attempt to reduce the complexity of extracting exact answers from the dataset and to replace the pronoun by the most suitable noun using the word net.

II. RELATED WORKS

There is a great deal of research on English question answering systems, question classification, taxonomies, and so on, but very little on Bangla. Banerjee and Bandyopadhyay (2012) worked on Bangla question classification: they studied suitable lexical, syntactic, and semantic features and Bengali interrogatives, proposed a single-layer taxonomy of nine coarse-grained classes, and achieved 87.63% question classification accuracy [3]. There are two approaches to classifying questions: i) the rule-based approach [3][4], and ii) the machine-learning-based approach [3][4]. Some researchers use hybrid approaches that combine the two [4]; such combined approaches have never been used for Bangla question classification. Li and Roth (2004) and Lee et al. (2005) proposed 50 and 62 fine-grained classes for English and Chinese QC respectively [3][4]. Lexical, syntactic, and semantic features (Loni, 2011) are the three categories of features in QC [3]. Loni et al. (2011) represent a question in the QC task similarly to document representation in the vector space model, i.e., as a vector described by the words inside the question [4]. Thus a question Q can be represented as Q = (f_1, f_2, ..., f_N), where f_k is the frequency of term k in question Q and N is the total number of terms [4]. Question taxonomies are the sets of question categories. Banerjee and Bandyopadhyay (2012) used a single-layer taxonomy for Bengali question types with eight coarse-grained classes and no fine-grained classes [3][4]; no other research has so far contributed Bengali taxonomies [3]. Banerjee and Bandyopadhyay (2012) used three kinds of features in QC: lexical features (fLex) such as wh-word, wh-word position, wh-type, question length, end marker, and word shape; syntactic features (fSyn) such as POS tags and head word; and semantic features (fSem) such as related words and named entities [4].

III. DATASET

To demonstrate our system, we first take pairs of simple affirmative sentences in which one sentence contains a bag of nouns and the other contains a pronoun corresponding to a noun of the first sentence. We have considered 50+ pairs of sentences as the dataset.

IV. PROPOSED ALGORITHM

Our developed QA system is capable of finding the exact answer to a given question from the document. It tokenizes the question into words and removes the wh-type words, then finds the best match through the document for the question words. The flowchart of our system is shown in Fig. 1.

Fig. 1. Flow chart of our proposed QA system (Input document → Input question → Tokenize the question into sentences → Tokenize sentences into words → Apply the proposed method to the document → Match the question with the document to find the answer using anaphora-cataphora resolution → Display the answer)
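As an illustration of the matching stage just described (the paper lists no code), the following Python sketch tokenizes the question, drops wh-words, and returns the document sentence with the largest word overlap. The wh-word list and example sentences are English placeholders; the actual system works on Bangla and applies the anaphora-cataphora resolution described next before matching.

```python
import re

# Assumed wh-word list; the real system removes Bangla interrogatives.
WH_WORDS = {"who", "what", "where", "when", "which", "how", "why"}

def tokenize(text: str) -> list[str]:
    return re.findall(r"\w+", text.lower())

def best_matching_sentence(question: str, document: str) -> str:
    """Return the document sentence sharing the most non-wh words with the question."""
    q_words = set(tokenize(question)) - WH_WORDS
    sentences = [s.strip() for s in re.split(r"[.?!]", document) if s.strip()]
    return max(sentences, key=lambda s: len(q_words & set(tokenize(s))))

if __name__ == "__main__":
    doc = "Rahim studies at SUST. He lives in Sylhet."
    print(best_matching_sentence("Where does Rahim study", doc))
    # -> "Rahim studies at SUST"; the second sentence can only ever match via
    #    the pronoun "he", which illustrates the problem addressed below.
```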
In this system, a big problem was faced. For example,
1st sentence: ।
2nd sentence: ।
Question:
According to our system, the answer would be "", which is not expected: " " needs to be replaced with " ". This is why a system was developed that can do so. Replacing the pronoun with its appropriate noun can be done by applying syntactic rules, semantic rules, and reasoning-type analysis. The system we developed combines semantic and syntactic rules: the semantic rules are applied first, and when they fail to proceed, the syntactic rules are applied.

i. Semantic rules: To reach our goal, we need to know the context or topic of the sentence. For this reason, some semantic rules are applied to get the appropriate noun from the first sentence. To get the context we use tag words from the word net and the Bangla parts of speech. For example,
The tag of " " is " ".
The tag of " " is " ".
The tag of " " is " ".
The tag of " " is " ".
The tag of " " is " ".
Now,
1st sentence:
2nd sentence: EEE ।
It can be seen that three words of the first sentence have the " " tag and two words have the " " tag, so the dominating tag is " " and the context of the first sentence is " ". Among the words with the " " tag, only " " is a proper noun, so " " will be replaced by " ".

Another example:
The tag of " " is " ".
The tag of " " is " ".
The tag of " " is " ".
1st sentence: ।
2nd sentence: ।
From this example, three words have the " " tag and no other tags are found, so the context of the first sentence is " " and " " will be replaced by " ".

Now,
The tag of " " is " ".
The tag of " " is " ".
The tag of " " is " ".
The tag of " " is " ".
1st sentence: ।
2nd sentence: ।
Here two words carry the tag " " and the other two words carry the tag " ", so a priority scoring is done by applying the syntactic rules, as shown in Tables I and II.
TABLE I. " জলার" TAG
Words | Score | Reason
" " | 2 | A single word, so it gets higher priority.
" " | 1 | A word formed by combining the two words " " and " "; as it contains " ", it gets lower priority (details in the syntactic rules).
Total | 3 |

TABLE II. " " TAG
Words | Score | Reason
" " | 2 | Single word
" " | 2 | Single word
Total | 4 |

We can see that the total score (priority) of the " " tag is 3 and that of the " " tag is 4, so we consider the higher priority. Here the " " tag has the higher priority and its word tag is " ", so we take " " as the most probable result.

The tag of " " is " ".
The tag of " " is " ".
The tag of " " is " ".
1st sentence: ।
2nd sentence: ।
Here " " and " " are nouns in the first sentence, and both are single words. In the second sentence, the " " tag denotes both " " and " ", since their priorities are equal.

From another example:
The tag of " " is " ".
The tag of " " is " ".
The tag of " " is " ".
The tag of " " is " ".
1st sentence: ।
2nd sentence: ।
Until now there has been no tag in the second sentence, but in this example we find the "সু া " tag there. There are two words with the " " tag and one with the " " tag. In the second sentence we find " ", which gives the direction to the " " tag. So we consider the " " tag as the expected answer, since in the first sentence the only word with that tag is " ".

Fig. 2. Tag tree

This is the tag tree used in the word net. Leaf nodes are proper nouns and internal nodes are their tag words; each internal node is itself its own tag word. We move up the tag tree until we find the maximal tag words.

The overall flow of the semantic rule is given as a flow chart in Fig. 3.

Fig. 3. Flow chart of the overview of the semantic rule
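Because the Bangla examples and tag names did not survive extraction, the sketch below only illustrates the dominant-tag heuristic in outline, with assumed English placeholders: count the word-net tags of the first sentence's words, take the most frequent tag as the context, and return a proper noun carrying that tag as the pronoun's replacement. The tag dictionary and proper-noun set are hypothetical, and the priority scoring and directional-tag refinements described above are omitted.

```python
from collections import Counter
from typing import Optional

# Hypothetical word-net tag dictionary and proper-noun set (placeholders).
TAGS = {"rahim": "person", "teacher": "person", "school": "place",
        "dhaka": "place", "book": "object"}
PROPER_NOUNS = {"rahim", "dhaka"}

def resolve_pronoun(first_sentence_words: list[str]) -> Optional[str]:
    """Dominant-tag semantic rule: the context is the most frequent tag,
    and the pronoun is replaced by a proper noun carrying that tag."""
    tags = Counter(TAGS[w] for w in first_sentence_words if w in TAGS)
    if not tags:
        return None                      # fall back to the syntactic rules
    context_tag = tags.most_common(1)[0][0]
    for w in first_sentence_words:
        if w in PROPER_NOUNS and TAGS.get(w) == context_tag:
            return w
    return None                          # fall back to the syntactic rules

if __name__ == "__main__":
    print(resolve_pronoun(["rahim", "teacher", "book"]))   # -> 'rahim'
```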
ii. Syntactic rules: In our system, the pronoun is replaced with its corresponding noun. If only one noun is found in the first sentence, it directly replaces the pronoun of the second sentence. For example,
1st sentence: ।
2nd sentence: ।
Here " " is the only subject of the first sentence, so " স" from the second sentence is replaced by " " from the first sentence.

Sometimes, however, there is more than one noun. For example,
1st sentence: ।
2nd sentence: ।
Here " " is the noun that is acting, and " " is the noun with " " and related to the noun " "; this is why we replace " " with " ".

Another example:
1st sentence: ।
2nd sentence: ।
Here " " is a pronoun in plural form, so it indicates " "; " " is a pronoun in singular form, so it indicates " ".

For example,
1st sentence: ।
2nd sentence: ।
Here " " and " " are two nouns from the first sentence, and " " and " " are pronouns from the second sentence. There is a " " with " ", so in this sentence " " is somehow connected with " ". " " is the noun taking the action; this is why " " is taken as the lower-priority noun and replaced with " " of the second sentence, and " " is the lower-priority pronoun for the same reason, so " " from the second sentence is replaced by " ". Similarly, any noun with " ", " ", " " etc. is taken as a lower-priority noun and replaces the lower-priority pronoun " ", " " etc.

1st sentence: ।
2nd sentence: ।
Here " " is an object-type pronoun in the second sentence, so an object-type noun is searched for in the first sentence. " " and " " are both nouns in the first sentence; " " is a subject-type noun and " " is an object-type noun, so " " replaces " ".

Again,
1st sentence: ।
2nd sentence: ।
There are two nouns in the first sentence and two pronouns in the second sentence. Since the only place-type noun is " " and the object-type noun is " ", the place-type pronoun " " of the second sentence is replaced by the noun " " and the object-type pronoun " " is replaced by the noun " ".

V. RESULT ANALYSIS

Fifty documents with ten question queries per document were taken in this study; only ten examples are shown in the table as a sample.

TABLE III. RESULT SCORING TABLE
(Columns: No, Document, Question, Naive Approach, Our Proposed System, Score; the Bangla documents and questions are not reproduced here. Sample score pairs include (1/1) vs (1/1), (0/2) vs (2/2), (2/2) vs (1/1), (0/2) vs (1/1), (0/2).)

The scoring is demonstrated in the table: each problem is scored by the number of pronouns found relative to the number required. Three of the demonstrated problems fail to find the required nouns, and seven find the nouns and also extract the single words for the exact answer. When the system fails it scores 0 out of 1; when it finds the noun for the pronoun it scores 1 out of 1.

Accuracy = (score obtained / total score) × 100% = 60%    (2)

TABLE IV. ACCURACY MEASURING TABLE
No. of observation | Accuracy given by naive approach | Accuracy given by our proposed system
1 | 60% | 100%
2 | 50% | 60%
3 | 60% | 60%
4 | 80% | 100%
5 | 60% | 70%
6 | 50% | 60%
7 | 60% | 70%
8 | 60% | 60%
9 | 70% | 80%
10 | 60% | 80%
Average | 61% | 74%

Here, observations 4, 8, and 10 are based on reasoning facts. There are many types of reasoning facts, such as induction, deduction, and counting.
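The averages reported in Table IV can be reproduced directly; the short sketch below recomputes the 61% and 74% means from the ten observations.

```python
from statistics import mean

# Per-document accuracies from Table IV (percent).
naive = [60, 50, 60, 80, 60, 50, 60, 60, 70, 60]
proposed = [100, 60, 60, 100, 70, 60, 70, 60, 80, 80]

print(mean(naive), mean(proposed))   # -> 61 74
```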
Dynamic Memory Allocation (DMA) can do this type of processing, but it also needs some basic knowledge, for example: 1) if someone does something bad, he will feel guilty later; 2) if a person thanks someone, the other will welcome him; 3) fast food is dangerous for health; and so on. Semantic Memory (SM) can be used to store and use such basic, static knowledge. To improve our accuracy we need a combination of both DMA and SM, which we plan to pursue; our previous system fails to show satisfactory results on these cases and is comparatively less flexible than our developed system.

VI. CONCLUSION

In this research a usable QA system has been implemented. During the implementation some challenges were faced; for example, our system only works for simple sentences and will be confused and give incorrect answers if a sentence is very complex. The Bangla word dataset is not very rich, and the existing contribution for Bangla is small, so it needs a great deal of time and effort. In the future a better result can be ensured by using better methods, techniques, and resources. In this system an accuracy of 60% has been found; better accuracy can be obtained with further processing in the future.

REFERENCES
[1] S. P. Lende and M. M. Raghuwanshi, "Question answering system on education acts using NLP techniques," in Futuristic Trends in Research and Innovation for Social Welfare (Startup Conclave), World Conference on, IEEE, 2016.
[2] S. Jayalakshmi and A. Sheshasaayee, "Automated question answering system using ontology and semantic role," in Innovative Mechanisms for Industry Applications (ICIMIA), 2017 International Conference on, IEEE, 2017.
[3] S. Banerjee and S. Bandyopadhyay, "Ensemble approach for fine-grained question classification in Bengali," in 27th Pacific Asia Conference on Language, Information, and Computation, 2013.
[4] S. Banerjee and S. Bandyopadhyay, "An empirical study of combining multiple models in Bengali question classification," in Proceedings of the Sixth International Joint Conference on Natural Language Processing, 2013.
[5] M. Iyyer et al., "A neural network for factoid question answering over paragraphs," in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014.
[6] W.-t. Yih et al., "Question answering using enhanced lexical semantic models," in Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), vol. 1, 2013.
[7] J. Andreas et al., "Learning to compose neural networks for question answering," arXiv preprint arXiv:1601.01705, 2016.
[8] R. J. Cooper and S. M. Ruger, "A simple question answering system," in TREC, 2000.
[9] D. Moldovan et al., "Lasso: a tool for surfing the answer net," in TREC, vol. 8, 1999.
[10] S. Tellex et al., "Quantitative evaluation of passage retrieval algorithms for question answering," in Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, 2003.
[11] R. Grishman, "Information extraction: techniques and challenges," in International Summer School on Information Extraction, Springer, Berlin, Heidelberg, 1997.
[12] E. Hovy et al., "Question answering in Webclopedia," in TREC, vol. 52, 2000.
[13] A. Senapati and U. Garain, "GuiTAR-based pronominal anaphora resolution in Bengali," in Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), vol. 2, 2013.
[14] T. Tazakka, M. Asifuzzaman, and S. Ismail, "Anaphora resolution in Bangla language," International Journal of Computer Applications, vol. 154, no. 9, 2016.
[15] S. Chatterji, A. Dhar, B. Barik, S. Sarkar, and A. Basu, "Anaphora resolution for Bengali, Hindi, and Tamil using random tree algorithm in WEKA," in Proceedings of ICON-2011, 2011.
[16] U. K. Sikdar, A. Ekbal, S. Saha, O. Uryupina, and M. Poesio, "Anaphora resolution for Bengali: an experiment with domain adaptation," Computación y Sistemas, vol. 17, no. 2, pp. 137-146, 2013.
[17] E. Fedele and E. Kaiser, "Looking back and looking forward: anaphora and cataphora in Italian," University of Pennsylvania Working Papers in Linguistics, vol. 20, no. 1, 2014.
[18] R. G. Bharadwaj et al., "A naïve approach for monolingual question answering," in CLEF (Working Notes), 2009.
Bangla Intelligence Question Answering System Based on Mathematics and Statistics
(2019 22nd International Conference on Computer and Information Technology (ICCIT), 18-20 December 2019)

Md. Kowsher
Dept. of Applied Mathematics, Noakhali Science and Technology University, Noakhali-3814, Bangladesh. Email: ga.kowsher@gmail.com

M M Mahabubur Rahman
Dept. of CSTE, Noakhali Science and Technology University, Noakhali-3814, Bangladesh. Email: toufikrahman098@gmail.com

Nusrat Jahan Prottasha
Dept. of CSE, Daffodil International University, Dhaka, Bangladesh. Email: nuaratjahan1234561234@gmail.com

Sk Shohorab Ahmed
Dept. of Information and Communication Engineering, University of Rajshahi, Rajshahi-6205, Bangladesh. Email: shohorab.ahmed.it@gmail.com

Abstract—The Bangla Informative Question Answering System (BIQAS) is a significant Machine Learning (ML) application that helps a user trace relevant information through Bengali Natural Language Processing (BNLP). In this paper we apply three mathematical and statistical procedures to BIQAS on question-answering data: cosine similarity, Jaccard similarity, and the Naive Bayes algorithm. Cosine similarity is combined with the dimensionality-reduction technique SVD on the user questions and the question-answering data to reduce space and time complexity. The procedure is separated into two parts: pre-processing the data and establishing a relationship between the user's question and the stored informative questions. We obtain 93.22% accurate answers using cosine similarity, 84.64% using Jaccard similarity, and 91.31% using the Naive Bayes algorithm.

Keywords—BIQAS, BNLP, Information retrieval, Machine Learning, Mathematics, Statistics.

I. INTRODUCTION

The present time is the era of information. Information is increasing day by day and the world is becoming more informative, so a virtual information retrieval system, that is, an artificial question answering system, keeps its significance. Users often have specific questions in mind and want to obtain replies; they would like the replies to be simple and precise, and they prefer to express their questions in their native language without being restricted to a specific query language, query-formation rules, or even a specific knowledge domain. The new approach taken to matching user needs is to carry out an actual analysis of the question from a linguistic point of view and to attempt to understand what the user really means.

In Bangla NLP, BIQAS is formed of three main modules: data collection, processing of the information and the user questions, and establishing the relationship between them. Different techniques are used for BNLP pre-processing, e.g., anaphora resolution, cleaning special characters and punctuation, stopword removal, verb processing, lemmatization, and synonym processing. To obtain proper anaphora resolution, the well-known Hobbs' algorithm is employed. For lemmatization, we describe three procedures that form a robust system with low time and space complexity. To reduce the dimension of a question and of the information, we use SVD, which also minimizes program execution time and simplifies understanding and calculation. TF-IDF is used to find the influence of words in documents and to construct the vectors. To generate replies to users' questions, we have used
cosine similarity, Jaccard similarity, and Naive Bayes. These methods help establish the relations between users' questions and the stored information. The contributions are summarized as follows:
• We introduce mathematical and statistical procedures for information retrieval in BIQAS.
• For pre-processing the data, we apply Hobbs' algorithm, edit distance, Trie, and DBSRA.
• We use SVD with cosine similarity to reduce time and space complexity and to allow instant answering of questions.
• To generate the answers of BIQAS, cosine similarity, Jaccard similarity, and Naive Bayes algorithms are used.
• To make the pre-processing steps easy, we developed the BLTK module, which contains all of the pre-processing steps and mathematical techniques.

For the easiest explanation, we consider two user questions as running examples throughout this paper:
User Question 1: ড় হ ? [Where is the largest female hall situated in Bangladesh?]
User Question 2: ? [What is the name of its auditorium?]

II. RELATED STUDY

Welbl et al. formed the WikiHop dataset, which contains questions that need more than one Wikipedia document to answer [1]. Asiaee et al. devised an ontology-based QA system, OntoNLQA, which has five primary parts: linguistic preprocessing, entity recognition, ontology element matching, semantic association discovery, and query formulation and answer retrieval [2]. Xie et al. proposed a question answering system based on ontology, with the ontology data extracted from the course "Natural Language Processing" [3]. Kowsher et al. proposed a Bangla chatbot based on Bangla language processing; the framework follows three basic steps: question processing, information retrieval (from the web), and answer extraction [4]. Lee et al. proposed an ontology-based QA system that defines sixteen types of queries, with the corresponding inference approach defined and implemented for each query [5]. Lopez et al. devised an ontology-based question answering system, AquaLog, in which the input questions are processed and categorized into 23 groups; if an input question falls within one of these groups, the system processes it accurately [6]. Raj proposed a domain-specific QA system based on ontological information; it has four main parts, the first of which is the question analysis that analyzes the user's question [7]. Robert F. Simmons et al. devised natural language question answering systems that mainly focus on syntactic, semantic, and logical analysis of English strings [8]. Boris Katz invented the world's first Web-based question answering system, START, created by the InfoLab Group at the MIT Computer Science and Artificial Intelligence Laboratory, which aims to provide "just the right information" instead of providing a number of hits [9]. Moldovan et al. utilized syntax-based natural language understanding and question classification techniques to obtain better accuracy in the TREC question answering task [10], and Kowsher et al. developed an information-based Bangla automatic question answering system that can provide informative knowledge in response to users' queries [11].
Cai Dongfeng and Cui Huan developed a web-based Chinese automatic question answering system that uses the Google Web Services API [12]. Liu Hongshen, Qin Feng, Chen Xiaoping, Tao Tao, et al. proposed teaching-mode software that can keep both the attendance and the answers of any student [13]. Jeon et al. evaluated question retrieval for four well-known retrieval methods: the vector space model, the Okapi model, the language model, and the translation model [14]. Wei Wang, Baichuan Li, and Irwin King proposed an improved question retrieval model that can detect users' intentions connected with the former question retrieval [15]. Unlike these works, we introduce an intelligent question answering system for Bangla with the help of mathematics and statistics.

III. BACKGROUND STUDY

A. Lemmatization
Lemmatization is a simplification process for finding the exact root word in natural language understanding. It has been used in a variety of real-world applications such as text mining, chatbots, and question answering. In this research we use an effective lemmatization algorithm for BNLP: we first slightly modify the Trie algorithm based on prefixes, and then use a new mapping-based algorithm titled Dictionary-Based Search by Removing Affix (DBSRA).

B. TF-IDF
TF-IDF is the abbreviation of Term Frequency-Inverse Document Frequency. It is a numerical technique for finding how important a word is to a sentence and is mathematically and statistically significant: TF determines the frequency of a term in a sentence, and IDF determines the importance of a word to its documents. Mathematically,

tf-idf(t, d) = tf(t, d) × log(N / df(t)),

where tf(t, d) is the frequency of term t in document d, N is the total number of documents, and df(t) is the number of documents containing t.

C. Cosine Similarity
Cosine similarity is a measure of similarity between two non-zero vectors of an inner product space that measures the cosine of the angle between them. For two non-zero vectors P and Q it can be derived as

cos θ = (P · Q) / (‖P‖ ‖Q‖).

D. Jaccard Similarity
The Jaccard index, also called Jaccard similarity, is a statistical method for determining the similarity of distinct sample sets. If P and Q are two sets, the Jaccard index is

J(P, Q) = |P ∩ Q| / |P ∪ Q|.

E. Naive Bayesian Theory
The Naive Bayesian hypothesis is the most common application of Bayes' theorem, a statistical hypothesis; testing of drugs, products, and materials, as well as computing the entire output of a company based on its machines, are typical examples. The theorem states that

M(A | B) = M(B | A) M(A) / M(B),

where A and B are events, M(B) ≠ 0, M(A | B) is the conditional probability of event A given that B is true, and M(B | A) is likewise the conditional probability of B given that A is true.
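The two set/vector measures defined above can be sketched briefly; the following hypothetical Python example computes cosine similarity over sparse term-weight vectors and Jaccard similarity over token sets, the measures used in Section VI to relate a user question to the stored questions. The token lists are English placeholders for pre-processed Bangla lemmas.

```python
import math
from collections import Counter

def cosine_similarity(p: dict[str, float], q: dict[str, float]) -> float:
    """cos(theta) = (P . Q) / (||P|| ||Q||) over sparse term-weight vectors."""
    dot = sum(w * q.get(t, 0.0) for t, w in p.items())
    norm_p = math.sqrt(sum(w * w for w in p.values()))
    norm_q = math.sqrt(sum(w * w for w in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

def jaccard_similarity(p: set[str], q: set[str]) -> float:
    """J(P, Q) = |P ∩ Q| / |P ∪ Q| over token sets."""
    return len(p & q) / len(p | q) if p | q else 0.0

if __name__ == "__main__":
    # Placeholder pre-processed questions (the real system uses Bangla lemmas).
    user_q = ["largest", "female", "hall", "bangladesh"]
    stored_q = ["female", "hall", "bangladesh", "university"]

    print(jaccard_similarity(set(user_q), set(stored_q)))         # 0.6
    print(cosine_similarity(Counter(user_q), Counter(stored_q)))  # 0.75
```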
IV. PROPOSED WORK
In this paper we present a Bengali Intelligence Question Answering System (BIQAS) based on mathematics and statistics using Bengali Natural Language Processing (BNLP). The procedure is divided into three parts: collecting informative documents, pre-processing the data, and establishing the relationships between the information and the user questions. Corpora are attached for pre-processing the inserted data. The cosine similarity, Jaccard similarity and Naïve Bayes algorithms are applied to obtain the relationship between the questions and the answers. Cosine similarity, however, operates on vectors, so the documents and questions are converted to vectors using the TF-IDF model. To minimize execution time and space complexity, we use the SVD technique.

Fig. 1. Proposed Work

V. PRE-PROCESSING
Before the algorithms can run, the dataset must be pre-processed; several pre-processing techniques are applied in BIQAS. The first is anaphora resolution. An anaphora is a word, typically a pronoun, that refers back to something mentioned earlier in a sentence to avoid repetition, and its successful identification and resolution is a requirement in NLP. In the proposed BIQAS we resolve anaphora, mainly personal pronouns, using Hobbs' algorithm.

Table 1: Workflow of Anaphora Resolution.

Here the pronoun (It) refers to the name 'ন োবিপ্রবি' (NSTU), and the anaphora resolution step links the pronoun to this antecedent in the previous example.

Word cleaning refers to removing unwanted characters that carry no meaning for the informative data, for example colons, semicolons, commas, question marks, exclamation points and other punctuation. Stop words are words that have no influence on documents or sentences; examples of Bengali stop words are (and), (where), (or), (to) and (with). Since BIQAS is algorithm-driven, the stop words must be removed; here we removed the Bengali stop words '(most)', '(where)' and '(what)' from the example questions.

In BNLP there are a few verbs that cannot be lemmatized by any lemmatization algorithm. For example, (went) and (going) both derive from the root word (go), yet there is no character-level relation between (went) and (go), so processing these words algorithmically is not a good choice; such verbs are instead mapped directly to their root verbs. In this work we use two lemmatization techniques, DBSRA and Trie. Some Bangla words are handled by Trie but not by DBSRA, or by DBSRA but not by Trie, so we use the Levenshtein distance to choose the better lemma between the DBSRA and Trie outputs. Lemmatization algorithms are also not a good choice for unknown words, i.e., names of places, persons or other entities. The Levenshtein distance helps to determine whether a word is known or unknown: we compute the edit probability between the lemma and the original word (before lemmatization), and if P(lemma | word) is greater than 50% [P(lemma | word) > 50%], the word is counted as unknown.

Fig. 2. Workflow of lemmatization
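The paper expresses the unknown-word test as a probability P(lemma | word) > 50%; the sketch below is one plausible reading of that rule (our assumption, not the authors' exact code), using a normalized edit distance to pick the better lemma between the Trie and DBSRA candidates and to flag words that differ too much from both candidates as unknown.

def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def edit_ratio(word, lemma):
    # share of the word that had to be edited to reach the lemma
    return levenshtein(word, lemma) / max(len(word), len(lemma))

def choose_lemma(word, trie_lemma, dbsra_lemma, unknown_threshold=0.5):
    # keep the candidate closest to the surface form; if even the best
    # candidate differs too much, treat the word as unknown (handled by
    # the suffix corpus described next)
    best = min((trie_lemma, dbsra_lemma), key=lambda l: edit_ratio(word, l))
    return None if edit_ratio(word, best) > unknown_threshold else best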
In order to process the unknown words, we established a corpus of Bengali suffixes, for instance (te), (che), (yer), etc. The longest common suffix is removed from the end of an unknown word, which yields the lemma or root of that word.

Synonymous words carry the same or nearly the same meaning as different words. Users ask questions containing words that are not present in the information data even though their meanings are, and in that case BIQAS may fail to answer correctly. Synonym processing therefore plays a significant role in BIQAS, as it does in Natural Language Understanding (NLU) in general. Here '(large)' is a synonym of '(big)', and '(large)' is taken as the common word. After pre-processing, the example questions become:
User Question-1: [Where is the largest female hall situated in Bangladesh?]
User Question-2: [What is the name of its auditorium?]

VI. ESTABLISHMENT OF RELATIONSHIP
A. Cosine Similarity
To represent the words of the user questions and the informative questions numerically, we use the TF-IDF vectorization model. To keep the presentation simple we take two examples from the considered corpus; the TF-IDF values of the informative questions and the user questions are shown in the table.

Table 2: Cosine Similarity Calculation

We set the term weights and construct the term-document matrix A and the query matrix q. We then apply the SVD, A = U Σ V^T, keeping the largest singular values, so that the rows of V hold the coordinates of the individual document (informative question) vectors. A new query vector is folded into the same space as
q' = q^T U_k Σ_k^(-1),
and the cosine similarity between q' and each row of V_k ranks the informative questions. For the first user question the largest cosine value is obtained for the second informative question, so user question 1 is matched with informative question 2 (IQ2); likewise, user question 2 is matched with informative question 1 (IQ1).

B. Jaccard Similarity
After pre-processing, the questions and data can be represented as sets. Using set notation and a Venn diagram for our example,
P = {িোাংলোদদশ, িড়, আিোবিক, ছোত্রী, হল, অিবিত}
Q = {িোাংলোদদশ, বিশ্ববিদযোলয়, িৃহৎ, ছোত্রী, হল, থোকো}

Fig. 3: Jaccard Similarity

C. Naïve Bayes Experiments
After pre-processing the data, we convert every word into the term-document matrix and then calculate the probabilities.

Table 3: Naïve Bayes Calculation
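For the Jaccard branch, the computation on the two pre-processed sets P and Q above reduces to a few lines; the sketch below uses romanized stand-ins for the Bengali tokens.

def jaccard(p, q):
    p, q = set(p), set(q)
    return len(p & q) / len(p | q) if p | q else 0.0

# hypothetical stand-ins for the pre-processed token sets P and Q above
user_q = {"bangladesh", "big", "residential", "chhatri", "hall", "obosthito"}
info_q = {"bangladesh", "bishwabidyalay", "brihot", "chhatri", "hall", "thaka"}
print(round(jaccard(user_q, info_q), 2))   # 3 shared tokens out of 9 distinct -> 0.33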
VII. EXPERIMENTS
We carried out a range of experiments to evaluate the proposed mathematical and statistical procedures for BIQAS. In this section we first present the questions that the experiments target and describe the experimental setup; we then discuss the performance and results of the proposed work.

A. Corpus
For the implementation of BIQAS we use five main corpora. The first corpus contains 28,324 Bengali root words and is used to lemmatize Bengali words. The second, which contains 382 Bengali stop words, is used to remove stop words from the inserted documents and questions. The third corpus consists of 74 topics compiled as informative documents of Noakhali Science and Technology University (NSTU), such as hall information, department information, teacher information, the library, NSTU nature and bus schedules, together with the relevant answers drawn from the documents' information. The fourth corpus contains 3,127 questions originated from the inserted documents, each with its corresponding answer. To test the system, we created 2,852 questions from the same 74 topics.

Fig. 4. Questions and Answers (Data)

B. Experimental Setup
We implemented the proposed model in the Anaconda distribution with the Python 3.7 programming language and executed it on a Windows 10 PC with an Intel Core i7 CPU (3.20 GHz) and 8 GB of memory. Python is a high-level object-oriented language suitable for scientific analysis and tool development; we used Anaconda as the Python distribution because it provides a strong open-source data science platform powered by Python. For pre-processing we used the BLTK (Bengali Language Toolkit) tool [3, 11], which also provides the TF-IDF, SVD, cosine similarity, Jaccard similarity and Naïve Bayes components.

C. Result and Analysis
To test the proposed BIQAS, we used the 2,852 test questions created from the 74 selected NSTU topics and obtained 93.22% accuracy with cosine similarity, 82.64% with Jaccard similarity and 91.34% with the Naïve Bayes classifier.

D. Comparison between English Chatbots and BIQAS
Since there is no Bengali chatbot like BIQAS, there is no other Bengali system with which to compare our work; it is at present the state of the art for a Bengali intelligence bot. We therefore compare BIQAS with two English chatbots, the Neural Conversational Machine (NCM) and Cleverbot. Sample exchanges:
What's your mobile number? Mitsuku: That information is confidential. BIQAS: [reply in Bengali].
How old are you? Mitsuku: I am 18 years old. BIQAS: [reply in Bengali].
What is your address? Mitsuku: I am in Leeds. BIQAS: [reply in Bengali].

VIII. CONCLUSION & FUTURE WORKS
The main goal of this work is to implement a Bengali intelligence bot for information retrieval. We have shown the theoretical and experimental methodology of the proposed work and described three procedures using machine learning, mathematics and statistics. To establish the full methodology we followed several steps: pre-processing, time and space reduction, and establishing the relation between the information and the questions. In the future, the proposed BIQAS system can be extended for education, industry, business and personal tasks with a voice-reply system.
Further, it can be enhanced with the help of deep learning algorithms such as recurrent neural networks (RNN) applied to BNLP.

REFERENCES
[1] Clark, Peter, et al. "Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge." arXiv preprint arXiv:1803.05457 (2018).
[2] Asiaee, Amir Hosein. A framework for ontology-based question answering with application to parasite data. Diss. University of Georgia, Athens, GA, USA, 2013.
[3] Kowsher, Md, et al. "Doly: Bengali Chatbot for Bengali Education." 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT). IEEE, 2019.
[4] Shah, Urvi, et al. "Information retrieval on the semantic web." Proceedings of the Eleventh International Conference on Information and Knowledge Management. ACM, 2002.
[5] Abdi, Asad, Norisma Idris, and Zahrah Ahmad. "QAPD: an ontology-based question answering system in the physics domain." Soft Computing 22.1 (2018): 213-230.
[6] Lopez, Vanessa, et al. "AquaLog: An ontology-driven question answering system for organizational semantic intranets." Web Semantics: Science, Services and Agents on the World Wide Web 5.2 (2007): 72-105.
[7] Raj, P. C. "Architecture of an ontology-based domain-specific natural language question answering system." arXiv preprint arXiv:1311.3175 (2013).
[8] Simmons, Robert F. "Natural language question-answering systems: 1969." Communications of the ACM 13.1 (1970): 15-30.
[9] Yu, Zheng-Tao, et al. "Answer extracting for Chinese question-answering system based on latent semantic analysis." Chinese Journal of Computers (Chinese Edition) 29.10 (2006).
[10] Whittaker, Edward, Sadaoki Furui, and Dietrich Klakow. "A statistical classification approach to question answering using web data." 2005 International Conference on Cyberworlds (CW'05). IEEE, 2005.
[11] Kowsher, Md, Imran Hossen, and Sk Shohorab Ahmed. "Bengali Information Retrieval System (BIRS)." International Journal on Natural Language Computing (IJNLC) 8.5 (2019).
[12] Dongfeng, Cai, et al. "A Web-based Chinese automatic question answering system." The Fourth International Conference on Computer and Information Technology (CIT'04). IEEE, 2004.
[13] Stalin, Shalini, Rajeev Pandey, and Raju Barskar. "Web based application for Hindi question answering system." International Journal of Electronics and Computer Science Engineering 2.1 (2012): 72-78.
[14] Jeon, Jiwoon, W. Bruce Croft, and Joon Ho Lee. "Finding semantically similar questions based on their answers." Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2005.
[15] Wang, Wei, Baichuan Li, and Irwin King. "Improving question retrieval in community question answering with label ranking." The 2011 International Joint Conference on Neural Networks. IEEE, 2011.
Design and Development of Question Answering System in Bangla Language from Multiple Documents
1st International Conference on Advances in Science, Engineering and Robotics Technology 2019 (ICASERT 2019), 978-1-7281-3445-1/19/$31.00 ©2019 IEEE
1 Samina Tasnia Islam, Computer Science and Engineering Department, Military Institute of Science and Technology, Dhaka, Bangladesh, saminatasnia113@gmail.com
2 Mohammad Nurul Huda, Computer Science and Engineering Department, United International University, Dhaka, Bangladesh, mnh@cse.uiu.ac.bd

Abstract – This paper presents the design and development of an automatic question answering system for the Bangla language. The purpose of the proposed system is to provide answers based on the keyword, lexical and semantic features of a question. The user poses a question whose answer has to be found in multiple documents. For measurement-related (time and quantity) questions the system gives a specific answer; otherwise it retrieves relevant answers.
Keywords – question answering; keyword extraction; stemming; information retrieval; sentence ranking.

I. INTRODUCTION
In a Question Answering (QA) system, a user finds the relevant answer to a question from an unstructured document collection. QA is similar to Information Retrieval (IR) in that only the minimum amount of information needed to satisfy the user's demand is retrieved [1]. Nowadays we need a system that receives a question in natural language from a user, finds answers in the given documents, and quickly returns relevant answers. This study designs a QA system for the Bangla language that allows a user to pose a question and retrieves relevant answers from a single document or from multiple documents. The originality of this paper is that the proposed system identifies the question type: for measurement-related (time and quantity) questions it provides a specific answer, and otherwise it retrieves relevant answers. Section II discusses the problems of QA systems in the Bangla language, Section III describes the implementation of the proposed system, Section IV analyses the evaluation of the system, Section V concludes, and the references are given in Section VI.

II. PROBLEMS OF QUESTION ANSWERING SYSTEMS IN THE BANGLA LANGUAGE
A question answering system for the Bangla language faces several problems:
1) Automatic QA from unstructured documents is difficult because the source text may contain only one answer to a user's question [2].
2) Mapping question strings to answer strings using lexical, syntactic or semantic relationships is another difficult task [2].
3) The greater the answer redundancy in the text document, the higher the chance of retrieving an answer that stands in a simple relation to the user's question; otherwise, the general NLP issues above have to be solved [2].
4) In Bangla there is the additional challenge of identifying the keyword or headword of a question, because there is no fixed position at which the 'wh'-word of the question appears.

III. IMPLEMENTATION
The proposed system is designed by merging the contents of all the documents, removing stop words, stemming the question and the text documents, extracting keywords from the question, forming N-grams from the keywords for approximate matching, retrieving the n-best answers, generating specific answers by question type, and evaluating performance and correctness.
Details of the steps are as follows:

A. Removing Stop Words
The proposed system needs knowledge about conjunctions, pronouns, verbs and other high-frequency words so that these stop words can be removed from the question as well as from the text documents.

B. Stemming
Knowledge about suffixes is needed for stemming the words of the given question as well as of the text documents. Both the questions and the documents are stemmed to find the morphological stems of the words, which are used for approximate matching and for retrieving information from the source documents.

C. Keyword Extraction and N-gram Formation from Keywords for Approximate Matching
Keywords or headwords have to be extracted from the question. The proposed system uses a statistical approach, the word intermediate-distance vector and its mean value, to extract keywords from a sentence [3]. N-grams are then formed for effective approximate matching: the keywords generated from a question are turned into n-grams (unigram/bigram/trigram), which are compared with other sequences to retrieve the relevant information from the source text, as sketched after Table 1.

Table 1: Keyword Extraction Process (the keywords/headwords are the roots of the words).
বাাংলাদেদে প্রথম কম্পিউটার কদব আদে? → [প্থম, কম্পিউটা, আে]
অ্যাবাকাে কদব আববষৃ্কত হয়? → [অ্যাবাকাে, আববষৃ্কত]
জন ননবপয়ার (John Napier) এর অ্বি বক? → [ননবপয়া, (John, Napier), এ, অ্বি]
গটফ্রাইড ভন বলববনজ বকভাদব যাবিক কযালকুদলটর আববষ্কার কদরন? → [গটফ্ইড, যান্ত্ব ক, কযালকুদলট, আববষ্কা, নকন]
বরদকাবনাং যি বক? → [যন্ত্]
বডফাদরন্স ইন্জিন বক? → [ইন্জিন]

In Table 1, all the questions are set from a document about computers collected from Wikipedia (Bangla). It shows the keywords extracted from each question; the keywords are stemmed to find the morphological root of each word.
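As an illustration of step C, the sketch below (ours; the stems are romanized stand-ins, and the simple count-based score is only illustrative, since the final ranking in the paper is done by the textual entailment module described later) forms keyword n-grams and counts how many of them occur in a candidate sentence.

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def match_score(keywords, sentence_tokens, max_n=3):
    # approximate matching: count how many keyword n-grams occur in the sentence
    score = 0
    for n in range(1, min(max_n, len(keywords)) + 1):
        sent_grams = set(ngrams(sentence_tokens, n))
        score += sum(g in sent_grams for g in ngrams(keywords, n))
    return score

# hypothetical stemmed keywords and a candidate (stemmed) sentence
keywords = ["prothom", "computer", "ashe"]
sentence = ["bangladesh", "prothom", "computer", "ashe", "1964", "shal"]
print(match_score(keywords, sentence))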
D. Lexical and Semantic Features
In a Bangla question the 'wh'-word is a vital lexical feature, and the end marker also plays an important role: if the end marker is '।', the given question is of definition type [4]. In the proposed system the measurement unit is used as a semantic feature, and the answer is retrieved according to the interrogative (wh) type and the semantic feature: if the question type is time-related, কদব (kəbɛ) / কখন (kəkhən), or quantity-related, কত (kət), the proposed system gives a specific answer. A document about Cox's Bazar, one of the tourist spots of Bangladesh, and another document about computers were collected from Wikipedia (Bangla), and the questions given below were set from these documents.

Question 1: "কক্সবাজার থানা প্রথম কদব প্রবতষ্ঠিত হয়? (When was Cox's Bazar Thana first established?)"
Answer 1: "১৮৫৪ োল" ("year 1854")
Answer 2: "োল এবাং নপৌেভা" ("year and municipality")
Question 2: "কম্পিউটাদর প্রথম বাাংলা নলখা েম্ভব হয় কখন? (When did writing Bangla on a computer first become possible?)"
Answer 1: "১৯৮৭ োল" ("year 1987")
Question 3: "বাাংলাদেদে প্রথম কম্পিউটার আদে কত োদল? (When did the first computer arrive in Bangladesh?)"
Answer 1: "১৯৬৪ োল" ("year 1964")
Answer 2: "১৯৭১ োল" ("year 1971")
Answer 3: "১৯৮১ োল" ("year 1981")

For the first question, the first answer is correct; the second answer merely matches the keywords of the question. For question 2 only one answer is retrieved, and it is correct. For question 3 the first answer is correct and the other answers match the keywords of the question. In these answers every word is given in its root form.

E. Ranking the Retrieved Sentences
The retrieved sentences are ranked by a Textual Entailment (TE) module [5], and the best-ranked retrieved information is taken as the answer.

IV. EVALUATION
The proposed system is evaluated with the following formulas:
Precision = Relevant Items Retrieved / Retrieved Items … eq. (1)
Recall = Relevant Items Retrieved / Relevant Items … eq. (2)
F Score = 2 × (Precision × Recall) / (Precision + Recall) … eq. (3)
For evaluation, several documents were selected from Wikipedia and approximately 500 questions were set to test performance and correctness. For every question, the number of relevant items retrieved, the number of retrieved items, and the total number of relevant answers to the user's question were identified, and precision and recall were calculated using eq. (1) and eq. (2). The average precision and average recall were then computed, and the F score / F measure was calculated from them using eq. (3). After testing the roughly 500 questions, an average precision of 0.35 and an average recall of 0.65 were obtained, giving an F score of 0.45. The F score reflects the system performance: the higher the F score, the better the system.
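The per-question bookkeeping of eq. (1)-(3) can be reproduced in a few lines of Python; the counts used below are the first three rows of Table 2, and, as in the paper, the final F score is computed from the average precision and average recall.

def precision_recall(relevant_retrieved, retrieved, relevant):
    precision = relevant_retrieved / retrieved if retrieved else 0.0
    recall = relevant_retrieved / relevant if relevant else 0.0
    return precision, recall

# per-question counts: (relevant items retrieved, retrieved items, relevant items)
questions = [(1, 2, 1), (1, 1, 1), (0, 1, 1)]
scores = [precision_recall(*q) for q in questions]
avg_p = sum(p for p, _ in scores) / len(scores)
avg_r = sum(r for _, r in scores) / len(scores)
f_score = 2 * avg_p * avg_r / (avg_p + avg_r)    # eq. (3) on the averages
print(round(avg_p, 2), round(avg_r, 2), round(f_score, 2))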
Table 2: Process for Calculating Precision and Recall
Question | Relevant Answers Retrieved | Retrieved Answers | Relevant Answers | Precision | Recall
কম্পিউটার েদের উতপবি বকভাদব? | 1 | 2 | 1 | 0.5 | 1
যাবিক কযালকুদলটর েব বপ্রথম কদব আববষৃ্কত হয়? | 1 | 1 | 1 | 1 | 1
কম্পিউটার েদের অ্থ ব বক? | 0 | 1 | 1 | 0 | 0
বাাংলাদেদে প্রথম কম্পিউটার কদব আদে? | 1 | 1 | 1 | 1 | 1
অ্যাবাকাে কদব ততবর হয়? | 0 | 6 | 2 | 0 | 0
জন ননবপয়ার (John Napier) এর অ্বি বক? | 1 | 2 | 2 | 0.5 | 0.5
গটফ্রাইড ভন বলববনজ বকভাদব যাবিক কযালকুদলটর আববষ্কার কদরন? | 1 | 2 | 1 | 0.5 | 1
বরদকাবনাং যি বক? | 0 | 9 | 2 | 0 | 0
যাবিক কযালকুদলটর েব বপ্রথম কদব আববষৃ্কত হয়? | 1 | 2 | 1 | 0.5 | 1
গণকযি বক? | 1 | 1 | 1 | 1 | 1
মাইদরাপ্রদেের উদ্ভাবক নকান প্রবতিান? | 0 | 5 | 1 | 0 | 0
কম্পিউটাদর প্রথম বাাংলা নলখা েম্ভব হয় কখন? | 1 | 2 | 1 | 0.5 | 1
বাাংলা ওয়াডবপ্রদেবোং েফটওয়যার উদ্ভাবন কদর কারা? | 1 | 1 | 1 | 1 | 1
মাইদরােফট উইদডাজ' এর েদে বযবহাদরর জনয ইন্টারদফে 'ববজয়' কদব উদ্ভাববত হয়? | 1 | 2 | 1 | 0.5 | 1
Average | | | | 0.5 | 0.69

Table 2 shows the process for calculating precision and recall. For the first question the system retrieved two answers, so the number of retrieved items is 2; among them only one is relevant to the question, and the given document contains only one relevant answer, so relevant items retrieved = 1 and relevant items = 1. Hence, using eq. (1) and eq. (2), precision = 1/2 = 0.50 and recall = 1/1 = 1.00. Precision and recall were calculated in the same way for every question, and the average precision and average recall were computed from these values.

V. CONCLUSION
This paper has discussed a QA system that works over multiple documents. The study concludes the following: the approach is able to provide relevant answers to a user's question in the Bangla language from multiple documents; the keywords of a question can be found with this approach; the approach retrieves relevant specific answers for time-related and quantity-related questions; and the precision, recall and F score of the system are 0.35, 0.65 and 0.45, respectively. If the document size is large, the system may retrieve some less relevant information, since the total number of retrieved items increases. In future work the authors would like to: compare this approach with existing answer or information retrieval systems for other languages such as Chinese, English and Japanese [6][7][8][9]; identify other question types by their 'wh' words and retrieve answers accordingly; and, when a relevant answer starts with a pronoun, search the other sentences of the text for further relevant answers.

VI. REFERENCES
[1] D. Buscaldi, P. Rosso, J. M. Gomez-Soriano, and E. Sanchis, "Answering questions with an n-gram based passage retrieval engine," Journal of Intelligent Information Systems, vol. 34.2, pp. 113-134, 2010.
[2] E. Brill, S. Dumais, and M. Banko, "An analysis of the AskMSR question-answering system," Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing, vol. 10, Association for Computational Linguistics, 2002.
[3] S. Siddiqi and A. Sharan, "Keyword extraction from single documents using mean word intermediate distance," International Journal of Advanced Computer Research, vol. 6.25, p. 138, 2016.
[4] S. Banerjee and S. Bandyopadhyay, "Bengali question classification: Towards developing QA system," Proceedings of the 3rd Workshop on South and Southeast Asian Natural Language Processing, 2012.
[5] P. Pakray, P. Bhaskar, S. Banerjee, B. C. Pal, S. Bandyopadhyay, and A. Gelbukh, "A hybrid question answering system based on information retrieval," CLEF (Notebook Papers/Labs/Workshop), 2011.
[6] L. Zhenqiu, "Design of automatic question answering system base on CBR," Procedia Engineering, vol. 29, pp. 981-985, 2012.
[7] E. Sneiders, "Automated question answering using question templates that cover the conceptual model of the database," International Conference on Application of Natural Language to Information Systems, Springer Berlin Heidelberg, 2002.
[8] Y. Ke and M. Hagiwara, "An English neural network that learns texts, finds hidden knowledge, and answers questions," Journal of Artificial Intelligence and Soft Computing Research, vol. 7.4, pp. 229-242, 2017.
[9] T. Sakai, et al., "ASKMi: A Japanese question answering system based on semantic role analysis," Coupling Approaches, Coupling Media and Coupling Languages for Information Retrieval, pp. 215-231, 2004.
[10] L. Hirschman and R. Gaizauskas, "Natural language question answering: the view from here," Natural Language Engineering, vol. 7.04, pp. 275-300, 2001.
[11] A. Andrenucci and E. Sneiders, "Automated question answering: Review of the main approaches," ICITA (1), 2005.
[12] M. Z. Islam, M. N. Uddin, and M. Khan, "A light weight stemmer for Bengali and its use in spelling checker," Centre for Research on Bangla Language Processing, 2007.
Intelligent Decision Technologies (2019) 1-16, DOI 10.3233/IDT-180074, IOS Press
A novel Bengali Language Query Processing System (BLQPS) in medical domain
Kailash Pati Mandal (a,*), Prasenjit Mukherjee (a), Baisakhi Chakraborty (a) and Atanu Chattopadhyay (b)
(a) Department of Computer Science and Engineering, National Institute of Technology, Durgapur, India
(b) Department of BBA (H) and BCA (H), Deshabandhu Mahavidyalaya, Chittaranjan, India
* Corresponding author: Kailash Pati Mandal, Department of Computer Science and Engineering, National Institute of Technology, Durgapur, West Bengal, India. Tel.: +91 7407367323; E-mail: biltu.cse@gmail.com.

Abstract. Bengali is the seventh most widely spoken language in the world, and many researchers are working on Bengali-language information retrieval, question-answering and query-response systems. The proposed Bengali Language Query Processing System (BLQPS) is based on a natural language query-response model in which Bengali is used to extract knowledge data from a default database. The system relies on a scoring and pattern-generation algorithm that can generate Structured Query Language (SQL) from a natural language query in Bengali with the help of a synonym database. The proposed system is domain-based, and a large number of words have been initialized in the synonym database. The SQL is formulated through semantic analysis and is then used to extract knowledge data in Bengali from the default database.
Keywords: Query-response, scoring and pattern generation based algorithm, Structured Query Language (SQL), semantic analysis, Bengali Language Query Processing System (BLQPS), Natural Language Processing (NLP)

1. Introduction
The 21st century is the century of human-computer interaction, and the main aim of this discipline is to interact with computerized systems with as little effort as possible. Natural language processing (NLP) plays a very important role in human-computer interaction: humans understand natural language, whereas computerized systems understand machine languages, so a naive user cannot access a computerized system in his or her native language. NLP is a technique that converts human language into a machine-understandable form, allowing the naive user to access the computerized system in the native language without knowing the details of the conversion. A substantial amount of work has already been done on natural language interfaces to databases, and different researchers have applied different techniques. The conversion of natural language to SQL can be done through morphological analysis, syntactic analysis, semantic analysis, discourse integration and pragmatic analysis [3]. Some researchers have proposed a Natural Language Query Processing (NLQP) system as an interface to a database system using a semantic grammar [7]. A Hindi-language graphical user interface for a transport system has been developed using a set of predefined rules [8], and another Hindi-language interface for databases, based on karaka theory, generates SQL by comparing each token with a knowledge base [9]. There are many government awareness campaigns (health, education, etc.) run through various portals or e-platforms in English that the majority of citizens may not be able to access or understand because of language barriers, as most Indians who speak vernacular languages are not comfortable in English. Bengali is one of these vernacular languages.
Bengali is used in important states of India such as West Bengal, Tripura, Assam and the Andaman and Nicobar Islands, and it is also the national language of Bangladesh. The proposed system is a query-response model in the medical domain that processes medical-related queries posed in Bengali; it is aimed at overcoming the language barrier. The system's synonym database consists of two tables, an entities table and an attributes table, while the default database consists of three tables: hospital, doctor and department. The user can post a query in Bengali; parts-of-speech (POS) tagging is done by a scoring method, after which the system generates all possible patterns of unknown words. The generated patterns are compared with the synonym database and populated into a semantic table for semantic analysis, which in turn is used to construct the SQL over the default database. Finally, the SQL is executed by the system and the desired result is fetched. No adjective is used in the proposed system architecture for the medical domain, because no qualitative query is expected to be posted in the system; this is a limitation or constraint of the system.

2. Related works
A substantial amount of work has been done over the last few decades on natural language processing. Review papers on NLP research address the challenges between natural language and computing devices. NLP applications are based on phonology, morphology, semantics and pragmatics: phonology concerns the sounds of the speaker's speech; morphology is the structural study of a word that locates its root; semantic analysis expresses the textual meaning of a sentence without context; and pragmatic analysis expresses the meaning of a sentence within context. NLP studies interaction with computers through human language, and a wide range of NLP-based systems have been developed using mathematical and computational models of various aspects of language. Applications such as machine translation, natural language text processing and summarization, user interfaces, multilingual and cross-language information retrieval (CLIR), speech recognition, artificial intelligence (AI) and expert systems are discussed in [1]. The interface between natural language and databases is also an active research topic, and such intelligent interfaces are designed for naive users with no knowledge of databases. An intelligent interface for relational database systems that converts English-language queries to SQL using semantic matching, a data dictionary and a set of production rules is described in [2]. The conversion of human language to a formal language like SQL through phases such as morphological analysis, syntactic analysis, semantic analysis, discourse integration and pragmatic analysis is presented in [3]. The Prolog-based question answering system Chat-80 internally represents the meaning of English questions as a set of Prolog logic, and the answer is fetched by executing that logic, as discussed in [4]. The EFLEX system is an efficient database interface consisting of an analyzer, a mapper and a translator: the analyzer interprets the given natural language query for the mapper, the mapper maps the natural language to its corresponding SQL, and the translator forms the query correctly; its efficiency is improved with the Knuth-Morris-Pratt algorithm, as explained in [5]. The Knowledge Management System (KMS) is a query-response tool in which the user posts an English query and the KMS retrieves data from a default database using a set of predefined grammar rules and semantic analysis, as discussed in [6]. The NLQP system of [7] reduces the extra overhead of complex SQL and consists of four modules: the Analyzer Module tokenizes the English query into keys, the Parser Module combines these tokens and performs syntactic analysis, the Query Builder Module forms the SQL query using the parsing information, and the Code Optimizer Module fetches the data efficiently. A Hindi-language interface for a transport system has been developed for native Hindi speakers in which data is retrieved from a Hindi database using Hindi queries; SQL statements such as INSERT, UPDATE, DELETE, MIN(), MAX(), SUM() and AVG() are implemented in [8]. In [9], a Hindi-language interface for databases based on karaka theory is developed, which is very useful for native Hindi users: the Hindi query is divided into tokens, a shallow parser removes useless tokens, and a case solver forms a new Hindi query consisting of base words. The shallow parser uses the POS type of the verb to determine the SQL command; the token before the case symbol is treated as a table name and the token after it as an attribute name, while condition-start tokens (Section 3(d) of [9]) help to construct the condition part of the SQL query. A graph generator then represents the relationships among the command, table name, attribute name and conditional part, the query translator converts Hindi tokens to their English counterparts using a knowledge base, and finally the query executor runs the SQL query and retrieves the desired data from the database.
The ontology-based Natural Language Interface to Databases (NLIDB) in [10] produces better results than other existing NLIDBs; in this system the semantic representation uses the Ontology Web Language (OWL) for knowledge modeling, which increases the rate of correct responses to users' queries. The work in [11] deals with the structural ambiguity of a Bengali sentence: the sentence is tokenized by a tokenizer, a validator checks for grammatical mistakes, and an en-converter converts the sentence into a Universal Networking Language expression using dictionary entry look-up and rules of morphological and semantic analysis. The system in [12] identifies Bangla grammar using a predictive parser that applies a top-down technique; it uses a predefined XML dictionary for parts-of-speech tagging, since the access time of an XML file is much lower than that of other file formats, and the parse table is generated from a context-free grammar. The syntax analysis and machine translation system for Bangla sentences in [13] can convert all types of Bangla sentences to English using predefined grammar rules: the user-given Bengali sentence is tokenized, the number of tokens is counted, the sentence length is checked against the predefined rule lengths, and if they match, the corresponding phrases are retrieved and a parse tree is generated; the Bangla-to-English converter then forms the English sentence using a training corpus, selecting the words with maximum probability. The rule-based Bengali stemmer of [14] derives the Bengali root word by removing the affix from a given word: all words are categorized as carrying either a verbal affix or a nominal affix, and the stemmer checks every letter of a word, removing the affix when present to find the stem, for both verbal and nominal inflection. The system in [15] tokenizes the given English query and passes all tokens through automata, which remove articles and connectors, extract the correct pattern, substitute keywords with the proper attributes, and map each value to its attribute and table; if two or more tables are associated with the query, the automata join them, and in this way the SQL is built from the English query. In [16], a tense-based English-to-Bangla translation system converts an English sentence into the corresponding Bangla sentence: syntactic correctness is verified with a context-free grammar, a bottom-up approach generates the parse tree, and the system consists of a tokenizer, a syntax analyzer, a grammatical rule generator, a lexicon, a parse tree and a conversion unit. The tokenizer tokenizes the sentence and sends it to the syntax analyzer, which compares each token with the lexicon; if a token does not match the lexicon it is invalid, otherwise its POS type and Bengali meaning are found. The grammatical rule generator contains a set of predefined productions used to form a correct parse tree, the conversion unit converts the English parse tree into the corresponding Bengali parse tree, and finally the Bengali sentence is printed. Linked data, the SPARQL language and natural language interfaces can be an interesting solution for accumulating and disseminating biomedical knowledge: researchers may find SPARQL difficult to handle, and a natural language interface helps life-science researchers extract knowledge from a biomedical knowledge base, as in [21]. Non-expert users cannot easily access huge data repositories, and natural language interfaces to web data services are an emerging technology for giving them that access, as discussed in [22]. Natural language interfaces are also a crucial part of semantic knowledge representation systems, where understanding a formal representation language for modeling a particular domain is difficult for users; the authors of [23] introduce a semantic wiki system based on a controlled natural language interface that uses Attempto Controlled English in the Grammatical Framework, which also helps to manage queries in other (multilingual) natural languages.

A lot of research has already been done on parsing, parts-of-speech tagging, stemming and sentiment analysis in vernacular languages like Hindi, Bengali and Assamese, but very little work has been done on query processing in the Bengali natural language, so there is a need to develop query systems in local or vernacular languages. Since the majority of users in India living in rural or semi-rural areas are not comfortable with English, systems with a vernacular-language interface may help rural or backward areas handle queries. It has been found that rural areas mostly require information in domains such as medicine, education and agriculture. This work therefore discusses a Bengali Language Query Processing System in the medical domain, to help rural people or people in backward areas access medical information pertaining to their village, district or state. The rest of the paper is organized as follows: Section 2 discusses the literature review of related works, Section 3 gives the architecture of the BLQPS, Section 4 explains the methodology and tools used, Section 5 presents a comparative study of the general features of the proposed system against similar systems, and Section 6 discusses conclusions and future work.

3. Architecture of the BLQPS
The architecture of the proposed system is given in Fig. 1 (data flow diagram of the BLQPS).

3.1. Algorithm
The block diagram of the algorithmic steps is given in Fig. 2 (natural language query to SQL generation steps).

3.1.1. Log in to the system and post a query in Bengali
The user logs in to the proposed system and posts a query in Bengali.
A query is given below as an example:
নিদয়া ও পু িলয়ায় িক িক হাসপাতাল আেছ? (nothyea o purulyeki ki haspathal ache?)
which in English means: What hospitals are available in Nadia and Purulia?
3.1.2. Query tokenization
The proposed system reads the query and slices it into meaningful linguistic units called tokens after removing punctuation marks. These tokens are stored in a string array. The representation of the user-given query after tokenization is given in Table 1.

Table 1: Query string array for the NL query after tokenization
Index 0: নিদয়া (nothyea) | 1: ও (o) | 2: পু িলয়ায় (purulye) | 3: িক (ki) (What) | 4: িক (ki) (What) | 5: হাসপাতাল (haspathal) (Hospital) | 6: আেছ (ache) (Available)

3.1.3. POS tagging
The BLQPS contains a few predefined lists of Bengali stop words, namely interrogative words, pronouns, prepositions and conjunctions, stored in string arrays. These predefined words are termed known words, and all remaining words are treated as unknown words. A numeric value, termed a score, is assigned to each string array of predefined words: the scores of the interrogative, pronoun, preposition and conjunction arrays are 2, 3, 4 and 5, respectively, and the score of every unknown word is 1. The predefined string arrays with their scores are:
i. list_interrogative[ ] = {িক (ki) (What), কাথায় (kothae) (Where), কন (keno) (Why), …}, score 2.
ii. list_pronoun[ ] = {আিম (ami) (I), আমরা (amra) (We), আমােদর (amader) (Our), …}, score 3.
iii. list_preposition[ ] = { ারা (d"ara) (By), উপের (upre) (Above), িনেচ (niche) (Under), …}, score 4.
iv. list_conjunction[ ] = {এবং (ebong) (And), ও (o) (And), আর (ar) (And), …}, score 5.
When a query is posted in Bengali, it is first tokenized; the tokens, which may contain predefined words and unknown words, are placed in the query string array. Each token of the query string array is compared with the predefined words stored in lists i through iv. If a token matches a predefined word, the score of the corresponding list is assigned to that token; unknown tokens receive a score of 1. In this way a score array (Table 2) is derived from the query string array (Table 1). The score array has the same length as the token array, and the BLQPS maintains it to keep track of the score of every token after POS tagging. Concretely, the BLQPS selects every token from the tokenized query and compares it with each word of the interrogative list; if the token matches, its score is 2. Otherwise the token is compared with the pronoun list, and on a match its score is 3. In the same way the token is then compared with the preposition list and the conjunction list, and the corresponding list score is assigned on a match. If the selected token matches none of the known-word lists, the system marks it as unknown with a score of 1. The score of the first token is placed at index 0 of the score array, the score of the second token at index 1, and so on, following the indices of the token array. The proposed system simply ignores known tokens, identified by a score other than 1, and keeps track of all consecutive and non-consecutive unknown words, which are essential for pattern generation. The user-given query after POS tagging and score calculation is given in Table 2.

Table 2: Score array after POS tagging
Index: 0  1  2  3  4  5  6
Score: 1  5  1  2  2  1  1
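A minimal sketch of this scoring step (ours, with romanized stand-ins for the predefined Bengali word lists) is given below; applied to the example query it reproduces the score array of Table 2.

# predefined (known-word) lists and their scores, as in Section 3.1.3;
# the Bengali entries are replaced by romanized stand-ins
SCORED_LISTS = {
    2: ["ki", "kothae", "keno"],        # interrogatives
    3: ["ami", "amra", "amader"],       # pronouns
    4: ["dara", "upre", "niche"],       # prepositions
    5: ["ebong", "o", "ar"],            # conjunctions
}

def pos_scores(tokens):
    scores = []
    for token in tokens:
        for score, words in SCORED_LISTS.items():
            if token in words:
                scores.append(score)
                break
        else:
            scores.append(1)            # unknown word
    return scores

query = ["nodia", "o", "puruliay", "ki", "ki", "haspatal", "ache"]
print(pos_scores(query))                # [1, 5, 1, 2, 2, 1, 1], as in Table 2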
3.1.4. Pattern generation without replacement of token position
In this step the BLQPS generates all possible patterns of unknown tokens. The main objective of pattern generation is to create compositions of two or more unknown words. The proposed system recognizes a token as unknown when its score is 1. Unknown tokens that are not consecutive, i.e. that are separated by a known token or word, each yield only one pattern. When unknown tokens are consecutive and not separated by any known token, they generate several patterns of tokens, in which the order of the tokens is never changed. Known tokens are not considered for pattern generation, because the semantic table is built from unknown tokens only.

The query নিদয়া ও পু িলয়ায় িক িক হাসপাতাল আেছ? (nothyea o purulyeki ki haspathal ache?) ("What hospitals are available in Nadia and Purulia?") was tokenized in Table 1. After POS tagging, the BLQPS maps known and unknown tokens by considering the indices of the token array and the score array together with the score values. A non-consecutive unknown word generates only one pattern: here নিদয়া (nothyea) and পু িলয়ায় (purulye) are two non-consecutive unknown words, because the known word ও lies between them, so the token নিদয়া (nothyea) forms a single pattern and, similarly, the token পু িলয়ায় (purulie) forms another single pattern. The two consecutive unknown tokens হাসপাতাল (haspathal) and আেছ (ache) (Available), however, generate more than one pattern. The token order is preserved in any pattern made up of two or more tokens: since হাসপাতাল (haspathal) occurs before আেছ (ache), the pattern হাসপাতাল আেছ (haspathal ache) (Hospital Available) is generated, while the pattern আেছ হাসপাতাল is not. All generated patterns are listed below; a small sketch of this step follows the list.
i. নিদয়া (nothyea)
ii. পু িলয়ায় (purulie)
iii. হাসপাতাল (haspathal) (Hospital)
iv. আেছ (ache) (Available)
v. হাসপাতাল আেছ (haspathal ache) (Hospital Available)
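The pattern-generation rule, order-preserving contiguous combinations within each maximal run of unknown (score 1) tokens, can be sketched as follows (our reading of the rule, with romanized stand-ins for the Bengali tokens); it reproduces the five patterns listed above.

def generate_patterns(tokens, scores):
    # group maximal runs of consecutive unknown tokens (score == 1)
    runs, run = [], []
    for token, score in zip(tokens, scores):
        if score == 1:
            run.append(token)
        elif run:
            runs.append(run)
            run = []
    if run:
        runs.append(run)
    # every order-preserving contiguous combination within a run is a pattern
    patterns = []
    for run in runs:
        for i in range(len(run)):
            for j in range(i + 1, len(run) + 1):
                patterns.append(" ".join(run[i:j]))
    return patterns

query  = ["nodia", "o", "puruliay", "ki", "ki", "haspatal", "ache"]
scores = [1, 5, 1, 2, 2, 1, 1]
print(generate_patterns(query, scores))
# ['nodia', 'puruliay', 'haspatal', 'haspatal ache', 'ache'] --
# the same five patterns as in Section 3.1.4, in a slightly different order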
3.1.5. Semantic analysis
The BLQPS contains two databases: a synonym database and a default database. The synonym database contains the entities and attributes tables (Tables 5 and 6), while the default database contains the hospital, doctor and department tables (Tables 7, 8 and 9).

Table 3: Semantic table
Id | Entity_name | Attribute_name | Primary_key | Foreign_key | Candidate_key | Value
1 | Hospital | hos_district | hos_id | | | নিদয়া (nothyea)
2 | Hospital | hos_district | hos_id | | | পু িলয়ায় (purulye)
3 | Hospital | | hos_id | | |

Table 4: Desired result after processing the NL query
Hos_id | Hos_name | Hos_add | Hos_district | Hos_state
2 | নিদয়া জলা হাসপাতাল (Nadia District Hospital) | চাপরা (Chapra) | নিদয়া (nothyea) | পি মব (West Bengal)
3 | পু িলয়া জলা হাসপাতাল (Purulia District Hospital) | রঘনূাথপুর (Raghunathpur) | পু িলয়া (purulye) | পি মব (West Bengal)

Table 5: Entities table of the synonyms database
Entity_id | Entity_name | Synonyms | Primary_key | Foreign_key | Candidate_key
1 | Hospital | হাসপাতাল (Hospital) | hos_id | NULL | NULL
2 | Hospital | বদ শালা (Hospital) | hos_id | NULL | NULL
3 | Hospital | ি িনক (Hospital) | hos_id | NULL | NULL
4 | Hospital | া িনবাস (Hospital) | hos_id | NULL | NULL
5 | Hospital | আরগ শালা (Hospital) | hos_id | NULL | NULL
6 | Hospital | আরগ িনেকতন (Hospital) | hos_id | NULL | NULL
7 | Department | িবভাগ (Department) | dept_id | hos_id | NULL
8 | Department | শাখা (Department) | dept_id | hos_id | NULL
9 | Department | অংশ (Department) | dept_id | hos_id | NULL
10 | Department | দ র (Department) | dept_id | hos_id | NULL
11 | Doctor | ডা ার (Doctor) | doc_id | hos_id | dept_id
12 | Doctor | িচিক�সক (Doctor) | doc_id | hos_id | dept_id
13 | Doctor | বদ (Doctor) | doc_id | hos_id | dept_id
Table 6: Attributes table of the synonyms database
Attribute_id | Entity_name | Attribute_name | Synonyms | Primary_key | Foreign_key | Candidate_key
1 | hospital | hos_id | হাসপাতােলর মাণপ (Hospital's Identity) | hos_id | |
2 | hospital | hos_name | হাসপাতােলর নাম (Hospital's Name) | hos_id | |
3 | hospital | hos_add | হাসপাতােলর িঠকানা (Hospital's Address) | hos_id | |
4 | hospital | hos_district | হাসপাতােলর জলা (Hospital's District) | hos_id | |
5 | hospital | hos_state | হাসপাতােলর রাজ (Hospital's State) | hos_id | |
6 | doctor | doc_id | ডা ােরর ন র (Doctor's Identity) | doc_id | hos_id | dept_id
7 | doctor | doc_name | ডা ােরর নাম (Doctor's Name) | doc_id | hos_id | dept_id
8 | doctor | doc_qualification | যাগ তা (Qualification) | doc_id | hos_id | dept_id
9 | doctor | doc_specialist | িবেশষ (Specialist) | doc_id | hos_id | dept_id
10 | department | dept_id | িবভাগ ন র (Department's Identity) | dept_id | hos_id |
11 | department | dept_name | িবভাগ নাম (Department's Name) | dept_id | hos_id |
Table 7: Hospital table of the default database
Hos_id | Hos_name | Hos_add | Hos_district | Hos_state
1 | মুিশদাবাদ জলা হাসপাতাল (Murshidabad District Hospital) | লালেগালা (Lalgola) | মুিশদাবাদ (Murshidabad) | পি মব (West Bengal)
2 | নিদয়া জলা হাসপাতাল (Nadia District Hospital) | চাপরা (Chapra) | নিদয়া (Nadia) | পি মব (West Bengal)

Table 8: Doctor table of the default database
Doc_id | Doc_name | Doc_qualification | Doc_specialist | Hos_id | Dept_id
1000 | কয়া গরাই (Keya Gorai) | এম.িব.িব.এস. (M.B.B.S.) | অপথ ালেমালিজ িবেশষ (Ophthalmologist) | 1 | 10
1010 | শমীতা দাশ (Shamita Dasgupta) | এম.িড. (M.D.) | অি িবেশষ (Orthopaedist) | 2 | 20

Table 9: Department table of the default database
Dept_id | Dept_name | Hos_id
10 | অপথ ালেমালিজ িবভাগ (Ophthalmology Department) | 1
20 | অি িচিক�সা িবভাগ (Orthopedic Department) | 2

The patterns (নিদয়া, পু িলয়ায়, হাসপাতাল, আেছ, হাসপাতাল আেছ) generated by the BLQPS in the previous step may each be an entity name, an attribute name or a value. Each pattern is first checked against the entities table: if the pattern matches any value of the synonyms field in the entities table, the corresponding row values (entity name, primary key, foreign key and candidate key, excluding the synonyms value) are fetched and inserted into the semantic table; otherwise the pattern is matched against the attributes table. If the pattern matches any value of the synonyms field in the attributes table, the corresponding row values (entity name, attribute name, primary key, foreign key and candidate key, excluding the synonyms value) are fetched and inserted into the semantic table; otherwise the pattern goes to the default database for matching. If the pattern matches any value in any table of the default database, the column name of that table is selected, and the attributes table of the synonyms database is searched again for that column name, from which the entire row values with the corresponding column names are selected. For our example:
i. নিদয়া (nothyea) – selected as a default-database value; the corresponding row values are fetched from the attributes table.
ii. পু িলয়ায় (purulie) – selected as a default-database value; the corresponding row values are fetched from the attributes table.
iii. হাসপাতাল (haspathal) (Hospital) – selected as an entity from the entities table; the corresponding row values are fetched from that table.
iv. আেছ (ache) (Available) – not selected.
v. হাসপাতাল আেছ (haspathal ache) (Hospital Available) – not selected.
After the synonym-database matching is complete, the corresponding row values of নিদয়া (nothyea), পু িলয়ায় (purulie) and হাসপাতাল (haspathal) from the attributes and entities tables are fetched and inserted into the semantic table, excluding the values of the synonyms attribute of both tables. The representation of the above query after semantic analysis is given in Table 3.
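The matching procedure that populates the semantic table can be sketched as below. The dictionaries are toy, romanized stand-ins for the entities table, the attributes table and the values stored in the default database, and the two-step lookup that the real system performs after a default-database hit (find the column, then re-read the attributes table) is collapsed into a single dictionary for brevity.

# toy stand-ins for the synonyms and default databases (romanized)
ENTITY_SYNONYMS = {"haspatal": "hospital"}                       # entities table
ATTRIBUTE_SYNONYMS = {"haspataler jela": ("hospital", "hos_district")}
DEFAULT_DB_VALUES = {"nodia": ("hospital", "hos_district"),      # values present
                     "puruliay": ("hospital", "hos_district")}   # in the hospital table

def build_semantic_table(patterns):
    rows = []
    for p in patterns:
        if p in ENTITY_SYNONYMS:                  # pattern names an entity
            rows.append({"entity": ENTITY_SYNONYMS[p], "attribute": None, "value": None})
        elif p in ATTRIBUTE_SYNONYMS:             # pattern names an attribute
            entity, attribute = ATTRIBUTE_SYNONYMS[p]
            rows.append({"entity": entity, "attribute": attribute, "value": None})
        elif p in DEFAULT_DB_VALUES:              # pattern is a stored value
            entity, attribute = DEFAULT_DB_VALUES[p]
            rows.append({"entity": entity, "attribute": attribute, "value": p})
        # otherwise the pattern (e.g. 'ache', 'haspatal ache') is discarded
    return rows

patterns = ["nodia", "puruliay", "haspatal", "haspatal ache", "ache"]
for row in build_semantic_table(patterns):
    print(row)                                    # three rows, mirroring Table 3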
The general format for retrieving data from table(s) is SELECT attribute 1, attribute 2, attribute 3 … attribute n FROM entity 1 (table 1), entity 2 (table 2),</s> |
<s>entity 3 (table 3)… entity n (table n)WHERE475condition 1 and condition 2 and condition 3… and con-476dition n. SELECT, FROM and WHERE clauses are477fixed in the structure of a SQL query for retrieving data,478that why the proposed system has to find attribute(s),479entities and condition(s). There are several cases for480finding attribute(s), entities and condition(s).481Case i: The system identifies attributes if the at-482tribute_name field is NOT NULL, value field is NULL483and entity_name field is also NOT NULL; then values484in the attribute_name field of Semantic Table shown in485Table 3 is treated as attribute(s). The system concate-486nates those attribute(s) with their corresponding entity487name by “.” operator.488Case ii: The BLQPS considers all attributes if the489attribute_name field is NULL, value field is NULL and490entity_name field is NOT NULL in the semantic table.491The system concatenates the “*” to their corresponding492entity name by “.” operator.493Case iii: If the attribute_name field is NOT NULL494and value field is also NOT NULL then whatever at-495tributes are contained in the attribute_name field is496treated as condition. The system concatenates those at-497tribute(s) followed by “=” and value with their corre-498sponding entity name by “.” operator.499Case iv: The BLQPS finds entity name from the500value of entity_name field in semantic table. In case501the entity_name field contains duplicate value, the pro-502posed system selects distinct entity name.503Case v: If the value of primary_key field of one en-504tity matches with the value of foreign_key or candi-505date_key of other entity in the semantic table then the506system will perform joining operation between these507two entities.508Case vi: If two or more entries in value field is509NOT NULL and their corresponding attribute_name,510entity_name field contains same value, it means the511particular attribute of an entity has a list of values. In512this case system will use IN clause.513Case vii: Single entry in value field is NOT NULL514and their corresponding attribute_nameentity_515namefield is also NOT NULL. In this case system will516use “=” operators.517Case viii: If two or more condition exists, the system 518concatenates all condition(s) using AND. 519After SQL generation the above mentioned query 520i.e. নিদয়া ও পু িলয়ায় িক িক হাসপাতাল আেছ? (nothyea 521o purulyeki ki haspathal ache?) will be converted 522to following SQL SELECT hospital.* FROM hospi- 523tal WHERE hospital.hos_district IN (‘নিদয়া’ (nothyea), 524‘পু িলয়ায়’ (purulye)). 5253.1.7. SQL processed by system 526Finally the SQL is executed and the desired result is 527fetched from the default database by the BLQPS. The 528result has been given in tabular format in Table 4. 5293.2. Knowledge representation of the BLQPS 530The knowledge data has been stored in default 531database. The default database has been used for knowl- 532edge extraction using natural language query. The 533Database Administrator, Knowledge Administrator, 534System Administrator or any other resource person 535may update the proposed knowledge database. The 536synonyms database has been used to generate the se- 537mantic table. The synonym database contains entities 538table and attributes table. The default database contains 539of hospital table, doctor table and department table. 5403.2.1. Synonyms database 5413.2.1.1. Structure of entities table 542The entities table consists of six fields. 
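Before the table structures are detailed, note that the SQL generation rules of Section 3.1.6 (Cases i–viii) amount to a small assembly procedure over the semantic table. A hedged sketch in Python follows; the actual system builds the SQL string in PHP, and the function and data-structure names here are illustrative assumptions rather than the authors' code.

def semantic_to_sql(semantic):
    select, entities, conditions = [], [], []
    grouped = {}                                   # (entity, attribute) -> list of values
    for r in semantic:
        ent, attr, val = r.get("entity_name"), r.get("attribute_name"), r.get("value")
        if ent and ent not in entities:            # Case iv: keep distinct entity names only
            entities.append(ent)
        if ent and attr and not val:               # Case i: named attribute, no value
            if f"{ent}.{attr}" not in select:
                select.append(f"{ent}.{attr}")
        elif ent and not attr and not val:         # Case ii: entity alone -> "entity.*"
            if f"{ent}.*" not in select:
                select.append(f"{ent}.*")
        elif ent and attr and val:                 # Case iii: attribute with a value -> condition
            grouped.setdefault((ent, attr), []).append(val)
    for (ent, attr), vals in grouped.items():
        if len(vals) > 1:                          # Case vi: several values -> IN clause
            conditions.append(f"{ent}.{attr} IN ({', '.join(repr(v) for v in vals)})")
        else:                                      # Case vii: a single value -> "="
            conditions.append(f"{ent}.{attr} = {vals[0]!r}")
    for a in semantic:                             # Case v: join on matching key fields
        for b in semantic:
            pk = a.get("primary_key")
            if (pk and a.get("entity_name") != b.get("entity_name")
                    and pk in (b.get("foreign_key"), b.get("candidate_key"))):
                join = f"{a['entity_name']}.{pk} = {b['entity_name']}.{pk}"
                if join not in conditions:
                    conditions.append(join)
    sql = f"SELECT {', '.join(select)} FROM {', '.join(entities)}"
    if conditions:                                 # Case viii: conditions joined with AND
        sql += " WHERE " + " AND ".join(conditions)
    return sql

Applied to the semantic-table instances shown in Table 3 and Table 12, this sketch should reproduce the two SELECT statements quoted in the text, up to the ordering of the WHERE conditions.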
These fields are entity_id, entity_name, synonyms, primary_key, foreign_key and candidate_key. In this</s> |
<s>table, entity_id 545field is the primary key. Entity_name field contains 546participating entity name corresponding to table names 547of default database like hospital, doctor and depart- 548ment. The synonyms field contains all possible syn- 549onyms of the entities in Bengali language. The pri- 550mary_key field contains the name of primary key field 551of their respective entity. Similarly the foreign_key and 552candidate_key fields contain the name of the foreign 553key and candidate key field of their corresponding en- 554tity if value corresponding to foreign_key and candi- 555date_key exists, otherwise they shall be NULL respec- 556tively. The structure of entities table has been given in 557Table 5. 5583.2.1.2. Structure of attributes table 559The attributes table consists of seven fields. These 560fields are attribute_id, entity_name, attribute_name, 561synonyms, primary_key, foreign_key and candidate_ 562rrected proof versionGalley Proof 12/04/2019; 14:42 File: idt–1-idt180074.tex; BOKCTP/duyan p. 9K.P. Mandal et al. / A novel Bengali Language Query Processing System (BLQPS) in medical domain 9Table 10Query String array for NL query after tokenizationTokens বাকুড়া পু িলয়া ও (o) নিদয়ার হাসপাতােলর মেধ অি িচিক�সা িবভাগ কাথায় কাথায় আেছ(bãnkoora) (puruliea) (And) (nothyar) (haspathaler) (mothe) (osthi) (chykitsa) (bybhag) (kothae) (kothae) (ache)(Orthopedic) (Treatment) (Department) (Where) (Where) (Available)Array index 0 1 2 3 4 5 6 7 8 9 10 11Fig. 3. Natural language query in BLQPS.key. In this table, attrubute_id field is the primary 563key. The entity_name contains the entity name (Ta- 564ble name) of default database. The attribute_name field565contains all attributes of entities in default database.566The synonyms field contains all possible synonyms567of attributes of entities. Synonym words have been568stored in Bengali language. The primary_key field con-569tains the name of primary key field of corresponding570entity. Similarly the foreign_key and candidate_key571fields contain the name of foreign key and candidate572key fields of corresponding entity if the value of them573exists; otherwise they shall be NULL respectively. The574structure of attributes table has given in Table 6.5753.2.2. Default database5763.2.2.1. Structure of hospital table577The hospital table consists of five fields. These fields578are hos_id, hos_name, hos_add, hos_district, hos_state.579The hos_id field is the primary key of this table. Us-580ing this hos_id field the BLQPS uniquely identify each581entity instance of the table. The hos_name field con-582tains hospital name, hos_add field contains the hospital583address, hos_distrct field contains district name where584hospital is situated. Similarly hos_state field contains585state name where the hospital is located. The structure586of hospital table has given below in Table 7.5873.2.2.2. Structure of doctor table588The doctor table consists of six fields. These fields589are doc_id, doc_name, doc_qualification, doc_ special-590ist, hos_id and dept_id. The structure of doctor table591has given in Table 8.5923.2.2.3. Structure of department table593The department table consists of three fields. These594fields are dept_id, dept_name, hos_id. The structure of595department table has given in Table 9.5964. Methodology and tools used597HTML, PHP, MySQL and Avro Bengali software598have been used to develop the proposed system. HTML 599has been used as front end to design the web pages 600structure. 
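Since the knowledge (default) database and the synonyms database are implemented in MySQL (as noted below), the three default-database tables described in Section 3.2.2 could be declared roughly as follows. The column names follow Tables 7–9; the data types, key constraints and character set are assumptions and are not specified in the paper.

# Hypothetical MySQL DDL for the default database (column names from Tables 7-9;
# types, keys and the utf8mb4 character set for Bengali values are assumptions).
DEFAULT_DB_DDL = """
CREATE TABLE hospital (
    hos_id       INT PRIMARY KEY,
    hos_name     VARCHAR(200),
    hos_add      VARCHAR(200),
    hos_district VARCHAR(100),
    hos_state    VARCHAR(100)
) DEFAULT CHARSET=utf8mb4;

CREATE TABLE department (
    dept_id   INT PRIMARY KEY,
    dept_name VARCHAR(200),
    hos_id    INT,
    FOREIGN KEY (hos_id) REFERENCES hospital (hos_id)
) DEFAULT CHARSET=utf8mb4;

CREATE TABLE doctor (
    doc_id            INT PRIMARY KEY,
    doc_name          VARCHAR(200),
    doc_qualification VARCHAR(200),
    doc_specialist    VARCHAR(200),
    hos_id            INT,
    dept_id           INT,
    FOREIGN KEY (hos_id)  REFERENCES hospital (hos_id),
    FOREIGN KEY (dept_id) REFERENCES department (dept_id)
) DEFAULT CHARSET=utf8mb4;
"""

In the running system these statements would presumably be issued once by the administrator; the queries generated in Section 3.1.6 then run against these tables.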
PHP is a server-side scripting language that has been used in the back end. The knowledge database (default database) and the synonyms database have been implemented in MySQL. All Bengali queries</s> |
<s>in Ben- 604gali transcript has followed IPA notation as per Help: 605IPA/Bengali given in website https://en.wikipedia.org/ 606wiki/Help:IPA/Bengali 6074.1. Step i 608The Bengali Language Query Processing System 609(BLQPS) is a domain specific natural language query 610processing system. The system has been designed to 611handlemedical related queries in Bengali. The user will 612log into the system and will post a query in Bengali. 613The query window of the proposed system has given in 614Fig. 3. 615For example the user posts a query in Bengali. The 616query has been given. 617বাকুড়া, পু িলয়া ও নিদয়ার হাসপাতােলর মেধ অি িচ- 618িক�সা িবভাগ কাথায় কাথায় আেছ? (bãnkoora, puruliea 619o nothyar haspathaler mothe osthi chykitsa bybhag 620kothaekothaeache?) (Where is the orthopedic depart- 621ment available among Bankura, Purulia and Nadia’s 622Hospital?). 623The BLQPS tokenizes the query into twelve tokens 624and stores them into a query string array after removing 625punctuation marks. 626The array of tokens and array index of given query 627string after tokenization has been given in Table 10. 628rrected proof versionGalley Proof 12/04/2019; 14:42 File: idt–1-idt180074.tex; BOKCTP/duyan p. 1010 K.P. Mandal et al. / A novel Bengali Language Query Processing System (BLQPS) in medical domainTable 11Score array with score valueArray index 0 1 2 3 4 5 6 7 8 9 10 11Score array 1 1 5 1 1 4 1 1 1 2 2 14.2. Step ii 629After tokenization, The BLQPS selects every token 630from the tokenized query and compares with each word631of interrogative’s word list. If token matches with any632word of interrogative’s word list then the score of the633selected token will be same as score of the interroga-634tive’s word list i.e. 2. Otherwise tokenwill be compared635with each word of pronoun’s list. If token matches with636any word of pronoun’s list then the score of the se-637lected token will be same as score of the pronoun’s list638i.e. 3. Otherwise in the similar way token will be com-639pared with preposition’s list and conjunction’s list re-640spectively. If token matches then corresponding array641list score will be assigned. The selected token is not642matched with any of the above mention known word643list then the proposed system will determine the se-644lected token is unknown and score will be 1. Using645above mentioned procedure the proposed system will646assign a specific score in the score array after POS tag-647ging. The score value of first token from token array648will be assigned at 0th index in score array. The next649score value of second token from token array will as-650signed at 1st index in score array and other score value651of token(s) or word(s) will be assigned at the position of652score array as per their array indexing number in token653array. The proposed system simply ignores known to-654ken(s) or word(s) by identifying score value other than6551. The score will be calculated by the BLQPS after POS656tagging.657The score of all tokens of user given has been given658in Table 11.6594.3. Step iii660বাকুড়া, পু িলয়া ও নিদয়ার হাসপাতােলর মেধ অি িচিক-661�সা িবভাগ কাথায় কাথায় আেছ?662(bãnkoora, puruliea o nothyar haspathaler mothe os-663thi chykitsa bybhag kothaekothaeache?) (Where is the664orthopedic department available among Bankura, Pu-665rulia</s> |
<s>and Nadia’s Hospital?), has been tokenized and666shown in Table 10. After POS tagging the BLQPSmaps667known and unknown token or word by considering in-668dex of query string array(token array) and score ar-669ray as well as score value of the score array. The non-670consecutive unknown word generates only one pattern.671Here আেছ (ache) is one nonconsecutive unknown to-672ken or word. So the tokenআেছ (ache) will be the single673pattern. But two consecutive unknown tokens or words 674are বাকুড়া (bãnkoora) and পু িলয়া (puruliea). So these 675two consecutive unknown tokens shall generate more 676than one pattern. The token order occurrence is main- 677tained in the generated pattern which is made up of two 678or more than two tokens. The generated pattern বাকুড়া 679পু িলয়া (Bakuɽa puruliea) is made up of two tokens. 680The token বাকুড়া (bãnkoora) occurs before the token পু- 681িলয়া (puruliea). That why the pattern বাকুড়া পু িলয়া 682(Bakuɽa puruliea) will be generated and the pattern পু- 683িলয়া বাকুড়া (puruliea Bakuɽa) will not be generated 684by the BLQPS. Similar way all consecutive tokens will 685generate patterns. All generated pattern has been given 686below. 687i. বাকুড়া (bãnkoora) 688ii. পু িলয়া (puruliea) 689iii. বাকুড়া পু িলয়া (Bakuɽa puruliea) 690iv. নিদয়ার (nothyar) 691v. হাসপাতােলর (haspathaler) 692vi. নিদয়ার হাসপাতােলর (nothyar haspathaler) 693vii. অি (osthi) (Orthopedic) 694viii. িচিক�সা (chykitsa) (Treatment) 695ix. িবভাগ (bybhag) (Department) 696x. অি িচিক�সা (osthi chykitsa) (Orthopedic Treat- 697ment) 698xi. িচিক�সা িবভাগ (chykitsa bybhag) (Treatment De- 699partment) 700xii. অি িচিক�সা িবভাগ (osthi chykitsa bybhag) (Or- 701thopedic Department) 702xiii. আেছ (ache) (available) 7034.4. Step iv 704TheBLQPSwill compare each pattern (বাকুড়া (bãnk- 705oora), পু িলয়া (puruliea), বাকুড়া পু িলয়া (Bakuɽa puru- 706liea), নিদয়ার (nothyar), হাসপাতােলর (haspathaler), নিদ- 707য়ার হাসপাতােলর (nothyar haspathaler),অি (osthi) (Or- 708thopedic), িচিক�সা (chykitsa) (Treatment), িবভাগ (byb- 709hag) (Department), অি িচিক�সা (osthi chykitsa) (Or- 710thopedic Treatment), িচিক�সা িবভাগ (chykitsa byb- 711hag) (Treatment Department), অি িচিক�সা িবভাগ (os- 712thi chykitsa bybhag)(Orthopedic Department), আেছ 713(ache) (available) with synonym database as well as 714default database and insert into semantic table the 715matched value when a match occurs. 716i. বাকুড়া (bãnkoora) – This will be selected as de- 717fault database value and corresponding row value 718from attributes table will be fetched. 719rrected proof versionGalley Proof 12/04/2019; 14:42 File: idt–1-idt180074.tex; BOKCTP/duyan p. 11K.P. Mandal et al. / A novel Bengali Language Query Processing System (BLQPS) in medical domain 11Table 12Instances of semantic tableId Entity_name Attribute_name Primary_key Foreign_key Candidate_key Value1 hospital hos_district hos_id বাকুড়া (bãnkoora)3 hospital hos_district hos_id পু িলয়া (purulie7 hospital hos_district hos_id নিদয়া (nothyar)8 hospital hos_id11 department dept_id hos_id14 department dept_name dept_id hos_id অি িচিক�সা িবভাগ (osthi chykitsa bybhag) (Orthopedic department)Fig. 4. Conversion of NL query to SQL.ii. পু িলয়া (puruliea) – This will be selected as de- 720fault database value and corresponding row value 721from attributes table will be fetched.722iii. বাকুড়া পু িলয়া (Bakuɽa puruliea) – This will not723be selected.724iv. 
নিদয়ার (nothyar) – This will be selected as a default database value and the corresponding row value from the attributes table will be fetched. v. হাসপাতােলর (haspathaler) – This will be selected as an entity from the entities table and the corresponding row value will be fetched from the entity</s> |
<s>table.730vi. নিদয়ার হাসপাতােলর (nothyar haspathaler) – This731will not be selected.732vii. অি (osthi) (Orthopedic) – This will not be se-733lected.734viii. িচিক�সা (chykitsa) (Treatment) – This will not be735selected.736ix. িবভাগ (bybhag) (Department) – This will be se-737lected as entity from enttities table and corre-738sponding row value will be fetched from entity739table.740x. অি িচিক�সা (osthi chykitsa) (Orthopedic Treat-741ment) – This will not be selected.742xi. িচিক�সা িবভাগ (chykitsa bybhag) (Treatment De-743partment) – This will not be selected.744xii. অি িচিক�সা িবভাগ (osthi chykitsa bybhag) (Or-745thopedic Department) – This will be selected as746default database value and corresponding row 747value from attributes table will be fetched. 748xiii. আেছ (ache) (available) – This will not be se- 749lected. 750After completion of synonyms database matching, 751corresponding row values of বাকুড়া (bãnkoora), পু - 752িলয়া (puruliea), নিদয়ার (nothyar), অি িচিক�সা িবভাগ 753(osthi chykitsa bybhag) (Orthopedic department), হা- 754সপাতােলর (haspathaler), িবভাগ (bybhag) (Department) 755from attributes and entities tables will be fetched and 756inserted into the semantic table except value of syn- 757onyms attribues from both table. The representation of 758above mention query after semantic analysis has given 759in Table 12. 7604.5. Step v 761After SQL generation the abovementioned query i.e. 762বাকুড়া, পু িলয়া ও নিদয়ার হাসপাতােলর মেধ অি িচিক�সা 763িবভাগ কাথায় কাথায় আেছ? 764(bãnkoora, puruliea o nothyar haspathaler mothe os- 765thi chykitsa bybhag kothaekothaeache?) (Where is the 766rrected proof versionGalley Proof 12/04/2019; 14:42 File: idt–1-idt180074.tex; BOKCTP/duyan p. 1212 K.P. Mandal et al. / A novel Bengali Language Query Processing System (BLQPS) in medical domainFig. 5. User request with generated response.Fig. 6. Numbers of known and unknown tokens.orthopedic department available among Bankura, Pu- 767rulia and Nadia’s Hospital?) will be converted to fol- 768lowing SQL SELECT hospital.*,department.* FROM769hospital, department WHERE department.dept_name770= ‘অি িচিক�সা িবভাগ’ (osthi chykitsa bybhag) (Ortho-771pedic department) AND hospital.hos_district IN (‘বা-772কুড়া’ (Bakuɽa), ‘পু িলয়া’ (puruliea), ‘নিদয়া’ (nothyar))773AND hospital.hos_id = department.hos_id. Conver-774sion of natural language query to SQL has been in775Fig. 4.7764.6. Step vi777Finally the SQL will be executed by the BLQPS and778the desired result will be fetched from default database.779The user request with generated response has been780given in Fig. 5.7814.7. Time complexity of the BLQPS782i. After tokenization, let p tokens be present in the783user given string. Let p = m + n. After POS784tagging, the proposed system identifies m num-785bers of known tokens known and n numbers of786unknown tokens.787ii. Let there be q numbers of pre-defined word list788present. Each list contains a numbers of words.789Time taken to search 1st token in the 1st list of 790words = a unit. 791Time taken to search 1st token in the 2nd list of 792words = a unit. 793Time taken to search 1st token in the 3rd list of 794words = a unit. 795… 796… 797… 798Time taken to search 1st token in the qth list of 799words = a unit. 800Therefore, total time taken by the 1st token= a+ 801a+ a+ . . . (q times) = qaunit. 802∴ Total time taken by p numbers of tokens to 803search in the q numbers of list of words = 804pqa unit in the POS tagging</s> |
<s>phase. 805After POS tagging known and unknown tokens 806have been given in Fig. 6. 807Above example discussed in Section 3(iii) has 808been taken in the Fig. 6. 809iii. If token taken one at a time= n number of pattern 810will be generated. 811rrected proof versionGalley Proof 12/04/2019; 14:42 File: idt–1-idt180074.tex; BOKCTP/duyan p. 13K.P. Mandal et al. / A novel Bengali Language Query Processing System (BLQPS) in medical domain 13Table 13General features based comparative study of the proposed system with similar type systemSl. No. Author(s) & name General features of other systems General features of BLQPS (proposed system)of the system1 P. Kaur et al., Conve-rsion Of Natural Langu-age Query To SQL [17]i) In this system, uttered speech is identified by Hid-den Markov Model (HMM).ii) This system uses WordNet.iii) Pre-defined grammar rules have been used in thissystem.iv) The time complexity has not been mentioned here.i) TheBLQPS uses pattern generation to identifythe word.ii) The BLQPS does not use WordNet. It usessynonym database and default database.iii) Pre-defined grammar rules are not used in thissystem.iv) The time complexity has been computed here.2 K.M.A. Hasan et al., Re-cognizing Bangla Gram-mar Using PredictiveParser [12]i) This system uses predictive parser to identifyBangla grammar.ii) The proposed system uses the pre-defined XMLdictionary for parts of speech tagging.iii) The time complexity has not been mentioned here.i) The BLQPS uses scoring and pattern genera-tion technique to identify words.ii) This system uses pre-defined string arrays forparts of speech tagging.iii) The time complexity has been computed here.3 K.N. ElSayed, An ArabicNatural Language Inter-face System for a Data-base of the Holy Qu-ran [18]i) An Arabic Natural Language Interface System fora Database of the Holy Quran parses the Arabicsentence using context free grammar rules.ii) This system contains Arabic word and their corre-sponding SQL command.iii) The time complexity has not been mentioned here.i) The BLQPS uses scoring and pattern genera-tion.ii) SQL query is dynamically generated. There isno need to save every word to its equivalentSQL command.iii) The time complexity has been computed here.4 A. Sawant et al., NaturalLanguage to Database In-terface [20]i) This system uses SQL template to identify at-tribute(s) as well as table(s).ii) The time complexity has not been mentioned here.i) The BLQPS uses some pre-defined conditionto identify attribute(s) and table(s) dynami-cally.ii) The time complexity has been computed here.5 R. Alexander, et al., Nat-ural LanguageWeb Inter-face for Database (NL-WIDB) [19]i) Natural LanguageWeb Interface for Database (NL-WIDB) performs checking operation whether thequestion string is present in Data Dictionary.ii) The NLWDB uses SQL template string to convertthe NL to SQL element.iii) The time complexity has not been mentioned here.i) This system does not contain Data Dictionary.ii) The BLQPS uses some pre-defined conditionto construct SQL dynamically.iii) The time complexity has been computed here.6 J. Kaur et al., Implemen-tation of query proces-sor using Automata andnatural language process-ing [15]i) This automata based query processing system pro-cesses interrogative statements.ii) This system contains a Data Dictionary whichstores all possible pre-defined words of a particularsystem.iii) The time complexity has not mentioned here.i) The proposed system can process interroga-tive as well as assertive statements.ii) The</s> |
<s>BLQPS does not contain such type ofData Dictionary.iii) The time complexity has been computed here.7 M.M. Anwar et al., Syn-tax Analysis and Mach-ine Translation of BanglaSentences [13]i) Syntax Analysis and Machine Translation basedsystem works on pre-defined grammar rules.ii) This system uses trained corpus to identify a word.iii) The time complexity has not been mentioned here.i) The BLQPSworks on scoring and pattern gen-eration technique.ii) The proposed system uses synonym database,default to identity word.iii) The time complexity has been computed here.8 K.Muntarina et al., TenseBased English to BanglaTranslation Using MTSystem [16]i) The Tense Based English to Bangla TranslationUs-ingMT System uses pre-defined lexicon to identifywords.ii) It uses a set of production rules to convert Englishsentence to its corresponding Bengali sentence.iii) The time complexity has not been mentioned here.i) The BLQPS uses synonym database, defaultdatabase to identify word.ii) This system uses scoring and pattern genera-tion to convert Bengali sentence to its corre-sponding SQL.iii) The time complexity has been computed here.9 A. Kataria et al., Nat-ural Language Interfacefor Databases in HindiBased on Karaka The-ory [9]i) The NLIDB on Hindi language has been developedusing Paninian Framework and Karaka theory.ii) This system uses shallow parser.iii) This Hindi language based NLIDB finds root wordfrom given word.iv) The Graph Generator determines the relationshipamong the command, table name, attribute nameand conditional part.v) The time complexity has not been mentioned here.i) This has been developed on scoring and pat-tern generation.ii) The BLQPS does not use shallow parser.iii) This system does not find root word. It findspatterns of words.iv) The proposed system determines the relation-ship among the command, table(s), attribute(s)and conditional part using few pre-definedrules.v) The time complexity has been computed here.rrected proof versionGalley Proof 12/04/2019; 14:42 File: idt–1-idt180074.tex; BOKCTP/duyan p. 1414 K.P. Mandal et al. / A novel Bengali Language Query Processing System (BLQPS) in medical domainToken taken two at a time = n − 1 number of 812pattern will be generated. 813Token taken three at a time = n − 2 number of814pattern will be generated.815…816Token taken n at a time = 1 number of pattern817will be generated.818Total number of patterns generation without po-819sition replacement820=n+(n−1) + (n−2) + . . .+1 =n(n+1)iv. The synonym database search – There are entity821table, attribute table, default database tables.822Let there are x numbers of rows and y numbers823of columns in entity table. There are z numbers824of rows and w numbers of columns in attribute825table.826There are s numbers of rows and t numbers of827columns in default database table.828Time taken by 1st pattern to search in entity table829= xy unit time.830∴Time taken by n(n+1)2 pattern to search in entity831table = xy n(n+1)2 unit time.832Similarly, time taken by n(n+1)2 pattern to search833in attribute table = zwn(n+1)2 unit time.834Similarly, time taken by n(n+1)2 pattern to search835in default database tables = stn(n+1)2 unit time.836So the total time taken= {xy+ zw+ st}n(n+1)2837unit.838v. Time taken to create SQL = u1 unit time.839vi. Time taken to generate response = u1 unit time.840∴ Total time complexity841= f(p,m, n, a, q, x, y, z, w, s, t, u1, u2)= p+ pqa+n(n+ 1)n(n+ 1){xy +</s> |
<s>zw + st}.
∴ f(n) ≅ n + n × n × n + [n(n + 1)/2]·{n × n + n × n + n × n + 1}
= n + n^3 + [n(n + 1)/2]·{3n^2 + 1}
= n + n^3 + [(n^2 + n)/2]·{3n^2 + 1}
= n + n^3 + (3n^4 + 3n^3 + n^2 + n)/2
= (3n^4 + 5n^3 + n^2 + 3n)/2 = O(n^4)
5. General features based comparative study of the proposed system with similar type system
The comparative study of the proposed system with other similar systems is feature based. A few similar systems have been considered for the comparative study. Prabhudeep Kaur et al. have developed Conversion of Natural Language Query to SQL; this system is based on pre-defined grammar rules and WordNet and is able to convert speech to SQL using a Hidden Markov Model (HMM), as implemented in [17]. Another Bengali parser has been developed by Hasan et al. This parser works on the creation of Bengali grammar from Bengali sentences; the authors have considered a top-down parsing method and avoided left recursion in the context-free grammar (CFG), as in [12]. Access to the Holy Quran has grown rapidly with the growth of huge numbers of smart mobiles, tablets and laptops. A system has been developed by Khaled Nasser ElSayed to access a database of the Holy Quran. The primary features of this system are the translation of natural Arabic interrogative or imperative sentences into SQL commands and answer extraction from the Holy Quran database; a parsing technique and a little morphological processing, based on Arabic context-free grammar rules, have been used to build the interface of this system, as described in [18]. Aarti Sawant et al. have described a natural language interface to a database that can take a natural language question as input. The authors state that their system is able to generate a textual response from a relational database using a natural language query; the interface simplifies textual data extraction from the relational database without requiring knowledge of SQL, as developed in [20]. Other similar systems, such as NLWIDB [19] of Alexander et al., natural language interpretation using automata [15] of Kaur et al., the Bangla parser [13] of Anwar et al., Tense Based English to Bangla Translation Using MT System [16] of Muntarina et al. and the Natural Language Interface for Databases in Hindi Based on Karaka Theory [9] of Kataria et al., are discussed in detail in Table 13.
6. Conclusion and future work
The Bengali Language Query Processing System (BLQPS) is an automated system which is able to handle Bengali-language user queries. The user submits a query in the Bengali language; the BLQPS then processes the query and generates a response in Bengali. The BLQPS is designed in such a way that naive Bengali users can interact with a computerized system in their own language (i.e. Bengali). Queries containing adjectives cannot be processed by the proposed system; for example, “What are the best hospitals in Bankura” cannot be</s> |
<s>processed as best is an adjec-894tive. Hence, queries with qualitative terms defined by895adjectives cannot be processed which is a limitation of896the BLQPS.897From the time complexity analysis it is found that898time complexity is in O(n4) which works well when899the number of unknown words/tokens are few. How-900ever, as the number of unknown tokens increase, the901time complexity increases greatly which is another902drawback of this proposed system. Further comparative903analysis needs to be done with other similar type sys-904tems using time complexity, amortized analysis (pro-905cess based) so as to improve upon the time complex-906ity of the proposed system. The searching technique907needs to be improved in future work so that the search-908ing time is reduced. Alternative techniques of searching909and querying the database need to be developed which910is the scope of future work.911Acknowledgments912This researchwork has been done at Research Project913Lab under Dept. of Computer Science and Engineer-914ing of National Institute of Technology (NIT), Durga-915pur, The Authors would like to thank Dept. of Com-916puter Science and Engineering, NIT, Durgapur, India917for academically support to this research work.918References919[1] Reshamwala A, Mishra D, Pawar P. Review on natural lan-920guage processing. Engineering Science and Technology: An921International Journal 2013; 3(1): 113-116.922[2] Nihalani N,MotwaniM, Silakari S. Natural language interface923to database using semantic matching. International Journal of924Computer Applications 2011; 31(11): 29-34.925[3] Kaur S, Bali RS. SQL generation and execution from nat-926ural language processing. International Journal of Comput-927ing and Business Research 2012. Available from: http://928www.researchmanuscripts.com/isociety2012/54.pdf.929[4] Warren DHD, Pereira FCN. An efficient easily adaptable sys-930tem for interpreting natural language queries. American Jour-931nal of Computational Linguistics 1982; 8(3-4): 110-122. 932[5] Sujatha B, VishwanathaRaju S, Nagaprasad S. Efficient natu- 933ral language query interface to databases. International Jour- 934nal of Advanced Research in Computer Engineering and Tech- 935nology 2014; 3(9): 3300-3308. 936[6] Mukherjee P, Chakraborty B. A comparative analysis of 937permutation combination based and grammatical rule based 938knowledge provider system. Intelligent Decision Technolo- 939gies 2017; 11(1): 39-60. doi: 10.3233/IDT-160276. 940[7] Soumya MD, Patil BA. An interactive interface for natural 941language query processing to database using semantic gram- 942mar. International Journal of Advanced Research 2017; 3(4): 943193-198. 944[8] Borkar PS, Gahane L, Raut A, et al. Hindi language gui for 945transport system using natural language processing. Interna- 946tional Research Journal of Engineering and Technology 2017; 9474(3): 1293-1298. 948[9] Kataria A, Nath R. Natural language interface for databases in 949Hindi based on karaka theory. International Journal of Com- 950puter Applications 2015; 122(7): 39-43. 951[10] González JJ, Juarez RF, Fraire HJ, et al. Semantic representa- 952tions for knowledge modeling of a natural language interface 953to databases using ontologies. International Journal of Com- 954binatorial Optimization Problems and Informatics 2015; 6(2): 95528-42. 956[11] Mridha MF, Saha AK, Das JK. Solving semantic problem 957of phrases in NLP using universal networking language 958(UNL). International Journal of Advanced Computer Sci- 959ence and Applications 2014. 
Available from: https://thesai. 960org/Downloads/SpecialIssueNo9/Paper_3Solving_Semantic_ 961Problem_of_Phrases_in_NLP_Using_Universal_Networking 962_Language.pdf. 963[12] Hasan KMA, Mahmud A, Mondal A, et al. Recognizing 964Bangla grammar using predictive parser. International Journal 965of Computer Science and Information Technology 2011; 3(6): 96661-73. 967[13] Anwar MM, Anwar MZ, Bhuiyan MAA. Syntax analysis and 968machine translation</s> |
<s>of Bangla sentences. International Journal 969of Computer Science and Network Security 2009; 9(8): 317- 970326. 971[14] MahmudMR,AfrinM, RazzaqueMA, et al. A rule based Ben- 972gali stemmer. International Conference on Advances in Com- 973puting, Communications and Informatics 2014. p. 2750-2756. 974doi: 10.1109/ICACCI.2014.6968484. 975[15] Kaur J, Chauhan B, Korepal JK. Implementation of query pro- 976cessor using automata and natural language processing. Inter- 977national Journal of Scientific and Research Publications 2013; 9783(5): 1-5. 979[16] Muntarina K, MoazzamMG, BhuiyanMAA. Tense based En- 980glish to Bangla translation using MT system. International 981Journal of Engineering Science Invention 2013; 2(10): 30-38. 982[17] Kaur P, Shruthi J. Conversion of natural language query 983to SQL. International Journal of Engineering Sciences and 984Emerging Technologies 2016; 8(4): 208-212. 985[18] ElSayed KN. An Arabic natural language interface system for 986a database of the Holy Quran. International Journal of Ad- 987vanced Research in Artificial Intelligence 2015; 4(7): 9-14. 988[19] Alexander R, Rukshan P, Mahesan S. Natural language web 989interface for database (NLWIDB). Proceedings of the Third 990International Symposium 2013. Available from: https://arxiv. 991org/ftp/arxiv/papers/1308/1308.3830.pdf. 992rrected proof versionGalley Proof 12/04/2019; 14:42 File: idt–1-idt180074.tex; BOKCTP/duyan p. 1616 K.P. Mandal et al. / A novel Bengali Language Query Processing System (BLQPS) in medical domain[20] Sawant A, Lambateand P, Zore AS. Natural language to 993database interface. International Journal of Engineering Re- 994search and Technology 2014; 3(2): 1365-1368.995[21] Hamon T, Grabar N, Mougin F. Querying biomedical linked996data with natural language questions. Intelligent Decision997Technologies 2017; 8(4): 581-599. doi: 10.3233/SW-160244.998[22] Quarteroni S. Lightweight integration and natural language999querying of heterogeneous data services. Intelligent Decision1000Technologies 2012; 6(2): 149-162. doi: 10.3233/IA-120037. 1001[23] Kaljurand K, Kuhn T, Canedo L. Collaborative multilingual 1002knowledgemanagement based on controlled natural language. 1003Intelligent Decision Technologies 2015; 6(3): 241-258. doi: 100410.3233/SW-140152. 1005rrected proof version</s> |
<s>GESJ: Computer Science and Telecommunications 2018|No.2(54) ISSN 1512-1232 AUTOMATIC QUESTION AND ANSWER GENERATION FROM BENGALI AND ENGLISH TEXTS Shudipta Sharma1, Muhammad Kamal Hossen2, Md. Sajjatul Islam3, Md. Shahnur Azad Chowdhury4, Md. Jiabul Hoque5 1,2Chittagong University of Engineering & Technology, 3Chittagong Independent University, 4International Islamic University Chittagong, 5Southern University Bangladesh, Bangladesh 1alittleprogramming@gmail.com, 2kamalcsecuet@gmail.com, 3sajjatcse99@gmail.com, 4tipu_iiuc@yahoo.com, 5jia99cse@yahoo.com Abstract The aim of this paper is to build a Question-Answering (QA) system considering Bengali and English language whose task is to generate questions along with their answers. There are two different process modules for Bengali and English. In each module, the main part is Question Generator (QG) which generates possible questions from a sentence by choosing a possible phrase as an answer-phrase. After that, the system has to perform some NLP tasks such as main verb decomposition, subject-auxiliary inversion (for English), replace answer-phrase with the question tag. Considering bivokti (িবভি�), postfix and singular-plural, we have to choose the proper question tag and their form in Bengali. The selected answer-phrase for a question is used as the answer to that question. Another task of this system is to store the generated questions and their valid answers. Finally, the questions and their answers are displayed on the screen. The performance and accuracy of the system are evaluated on different Bengali and English texts. The performance of the system on question generation is 78.33% compared to the human being and the average percentage of accuracy of the generated questions is 76%. Keywords: Question Generator, NLP, Bengali-English, Answer-Phrase, Bilingual 1. Introduction If we ask that whether any interactive computer system about Question-Answering (QA) exists, the answer will be yes. But still, it is not so much. If there is a system that one can test by auto-generating different types of effective questions from some given texts as well as the answers for those questions, then it may be obviously helpful for him/her. Moreover, if it is bilingual, it will be an effective work for students as they may have academic subjects in two different languages (for example, Bengali and English in Bangladesh). So our goal is to design a QA system for Bengali and English. Generating raw questions along with answers can be a time consuming and effortful process. We have tried to do it here. Especially, the aim of the proposed system is to generate factual questions along with the answers from the texts. The system’s heart is a Question Generator (QG) that takes plain texts and gives a set of factual questions with the answers. A user can then select and revise them to create practice exercises or part of a quiz to assess whether students read the text and retained knowledge about its topic. We have motivated using informational, non-fiction texts that have factual information rather than opinions. And of course, here our work is not too much expert to do this. That expertise belongs to the future task. GESJ: Computer Science and Telecommunications 2018|No.2(54) ISSN 1512-1232 2. Literature Review Now the world has a</s> |
<s>great eye on the field of Natural Language Processing (NLP). Someone do researches on text processing, others on text understanding, text summarizing, finding answers for given questions, and generating questions, etc. All of these tasks are a great deal with NLP. Some of the works that are done earlier on question generation are listed below. Hitomi et al. [1] concerned with a type of infant bilingualism in which children have been regularly exposed to two languages from birth as a result of each of their parents speaking in a different language. Their system produces the only negative question in both Japanese and English language and judges the user’s response. Xu et al. [2] focused on the methods for question generation and answer judging as well as the game implementation. Here they worked with Chinese and English statements. A cross-language QA system was developed by Plamondon et al. [3]. Here they developed a system that receives questions in English language and shows the answers in English for the texts in English. They also transformed the system into a bilingual system to allow French speakers to ask their questions in French and get answers in French as well but using an English document collection. Kaur et al. [4] described a system that first collects the corpus of data or paragraph from the encyclopedia to make the questions and find the exact answers. Plamondon et al. [5] developed a system where the question must be asked in English, the document collection was in English and the answer extraction was performed in English. Filho et al. [6] developed a system where they tried to classify the questions in only four types (“who”, “where”, “when” and “how many” questions). Sharma et al. [7] developed a system on automatic generation of questions from the given paragraph in Punjabi language and also the system would generate the multiple choice questions from the generated questions. Generation of multiple choice questions is very important because this helps anyone to test their knowledge in the specific field. One can give the answer easily by choosing one option from a given set of options provided by the system and then the system evaluates the given answer and generates the result for all of the given answers. Various Punjabi language-dependent rules and examples have been developed to generate the output based on the given input. The questions would be generated by the proposed system on the basis of these rules and examples. The system would use rules-based approach, pattern matching, and information extraction. The rules were made according to some keywords like “names”, “location names”, “dates”, “years”, etc. A corpus in the Punjabi language was also created which find the named entities for the names of persons, cities, places, etc. A few Bengali and English question answering system is also developed. Haque et al. [8] developed a question answering system based on transliteration and table look-up as an interface for the medical domain. The system is in no way a complete QA</s> |
<s>system; however, it gives a basis to implement a complete QA system for Bengali. The implementation was involved with the generation of questions from the medical domain only. They also considered only simple questions (‘Wh’ questions). Pakray et al. [9] developed a keyword based multilingual restricted domain question answering system with dialogue management for railway information in Bengali and Telugu. The system accepted typed text inputs and provided text output as well. Hoque et al. [10] proposed a framework for generating questions and corresponding answers considering the documents of two different languages- Bengali and English. But this can only generate simple ‘Wh’ questions. Banarjee et al. [11] demonstrated a system that ensemble of multiple models achieved satisfactory classification performance in the task of question classification. GESJ: Computer Science and Telecommunications 2018|No.2(54) ISSN 1512-1232 3. The System Model and Methods The model and the workflow of the proposed QA system are described in the following sections. We have worked on two different procedures to solve the problem for Bengali and English in a different way. It has four basic modules such as (a) Input module, (b) English module, (c) Bengali module, and (d) Output module. The architecture is shown in Figure 1. Figure 1: The System Model 3.1 Input Module It takes the texts and language option as input from the user. The language option is either Bengali or English. It’s another task is to select the mode for further processing based on language option. 3.2 English Module In this module, basically, we have followed the procedure described by Heilman et al. [12] which is explained in this section. Here this procedure is modified slightly. This module deals with the Question Generator (QG) for English that defines a two steps process for question generation: (i) NLP Transformation and (ii) Question Creation. In step (i), the text sentences are transformed GESJ: Computer Science and Telecommunications 2018|No.2(54) ISSN 1512-1232 into simpler straightforward declarative statements. This is done by applying some syntactical and grammatical operations. It has operations for transforming complex sentences into simple sentences and resolution of pronouns. In step (ii), these generated sentences are processed to generate questions by following some operations (Wh-movement, subject-auxiliary inversion, etc.). Here we have used some NLP tools to analyze the input sentences. The Stanford Core NLP tool is used to auto sentence split, tokenize, and parse sentence resulting in Penn Treebank style. We have also used the Parts of Speech (POS) tagger which labels the words of a sentence as their POS. It also labels the proper nouns with their semantic classes (often just person, organization, and location). All of these are implemented in the Stanford Core NLP tool. 3.1.1 NLP Transformation of Input Sentences This step represents the first part of the QG. The English Grammatical rules are applied for transformation of complex sentences into simple sentences on the input text sentences if necessary. Then the pronoun replacement has been applied. So, the first task is to extract simplified statements. Sometimes sentences contain many individual</s> |
<s>parts such as “We, the students of your school, are fully responsible for this”. This is extracted to “We are fully responsible for this”. Three subtasks are followed to accomplish the goal. They are: (i) Removal of Stop Words and Relative Clauses: The words whose absences don’t make any significant change in the sentence are called the stop words. And the clause which contains the relative pronouns at the beginning is called the relative clause. We can simplify many sentences by removing these unnecessary parts. For example, from the sentence “However, they, who are the students of your school, want to do it”, we can remove the word ‘however’ and the relative clause ‘who are the students of your school’ to transform into “They want to do it”. Proc simpleSentenceExtract(tree) begin if tree.firstChild.label is PP then move tree.firstChild as tree.lastChild end if replace tree with extractDFS(tree) end Proc Proc extractDFS(tree) begin if tree.firstChild is NULL then return end if if tree.firstChild.label is WP or WP$ then delete tree.firstChild from tree else if tree.weight is equal to 1 and tree.leafValue is in stopWordList then delete tree.firstChild from tree end if extractDFS(tree.firstChild) end Proc (a) (b) Figure 2: The Extraction Algorithm (a) Primary Method, (b) Secondary Method (ii) Splitting Conjunctions: We split conjunctions between clauses and verb phrases. For example, we split the sentence, “Nepal, Bhutan and China are located near Bangladesh but do not share a border with it” as “Nepal, Bhutan and China are located near Bangladesh” and “Nepal, Bhutan and China do not share a border with it”. The extraction algorithm is given in Figure 2. There are two methods in this algorithm where the main GESJ: Computer Science and Telecommunications 2018|No.2(54) ISSN 1512-1232 method takes a parse tree of a sentence as input and gives a set of the parse tree of simplified sentences as output. (iii) Pronoun Replacing: If the extracted simplified sentences contain any pronoun, then the generated questions may not be perfect. So we have tried to replace the pronouns with the antecedent nouns. For example, “Bangladesh is a country in South Asia. It shares land borders with India and Myanmar” contain pronoun ‘it’. So, the second sentence will be transformed into “Bangladesh shares land borders with India and Myanmar”. This is pronoun replacing. The algorithm is shown in Figure 3. Proc pronounReplace(sentenceList, tokenList) begin for each sentence in sentenceList for each token in tokenList S = {location, time, organization, etc.} if token.posTag is person or is in S then store all the continuous persons in vector pers store all continuous non-person named entity in another vector nonPers else stop inserting token into its vector. end if if token.posTag is pronoun then if token is personalPronoun then replace token with pers else replace token with nonPers end if end if end for end for end Proc Figure 3: The Pronoun Replacement Algorithm 3.1.2 Question Creation After completing the NLP transformation, we have followed the following steps: (i) Answer phrase selection and generation of question phrases</s> |
<s>for the selected answer phrase, (ii) Main verb decomposition, (iii) Subject-auxiliary inversion, and (iv) Replacement of answer phrase with question phrase and placing at the beginning of the sentence. These steps are shown in Figure 4. GESJ: Computer Science and Telecommunications 2018|No.2(54) ISSN 1512-1232 Figure 4: The Steps of Question Generation from English Sentences The proposed system selects a noun phrase (NP) or prepositional phrase (PP) as answer phrase. This step is skipped for yes/no type questions since they have no question phrase. The algorithm for this is shown in Figure 5. Proc selectAnsPhrase(tree) begin if tree.firstChild.label is NP or PP then store ansPhrase as tree.firstChild end if selectAnsPhrase(tree.firstChild) end Proc Figure 5: The Answer Phrase Selection Process If an auxiliary verb or modal is not present, the system changes the main verb into the appropriate form of do and the base form of the main verb and the algorithm is supplied in Figure 6. Now, the subject-auxiliary inversion is needed to generate grammatically correct questions. In questions, the auxiliary verb is located before the subject. So, we need to identify the subject and auxiliary verb and invert them. Now, the remaining steps are ‘Answer Phrase Removal’ and ‘Question Phrase Insertion’. In this step, we have following Table 1. The table specifies the question phrase for each selected answer phrase. In the case of yes/no type questions, this step is not needed. GESJ: Computer Science and Telecommunications 2018|No.2(54) ISSN 1512-1232 Proc decomposeMainVerb(tree) begin if tree.label is VP then if tree.hasChild.label is VBZ then aux is “does” replace tree with lexeme(tree.label) insert aux before tree else if tree.hasChild.label is VBD then aux is “did” replace tree with lexeme(tree.label) insert aux before tree else if tree.label is VB then if lexeme(tree.label) is not equal to tree.label then aux is equal to tree else aux is equal to “do” insert aux before tree end if else if tree.label is MD aux is equal to tree end if return end if decomposeMainVerb(tree.firstChild) end Proc Figure 6: The Method of Main Verb Decomposition Table 1: Mapping of Answer Phrase to Question Phrase WH Word Conditions Examples Who Person or personal pronoun (I, he, herself, them, etc.) Abdul Quaium, he, etc. What Object (not person or time) Mountain, book, etc. Where Location proceded by the preposition (on, in, etc.) in Bangladesh When Time, month, year, day or date Wednesday How many Cardinal (CD) or quantifier (QP) phrase 5 taka Whose Noun with a possessive (’s or ’) Rahim’s book 3.3 Bengali Module Like English module, there is also a module of Question Generator (QG) for Bengali. We have generated Bengali questions in a different way from English. But still, some parts are same as before. GESJ: Computer Science and Telecommunications 2018|No.2(54) ISSN 1512-1232 After preprocessing (sentence splitting, tokenization) of the text, we have to do NLP transformation (simple sentence extraction as before) and question generation from person, name entity, time, etc. But it is not needed to decompose main verb and subject-auxiliary inversion. We just need</s> |
<s>to replace the answer phrase with the proper question phrase. The step by step process of generating questions from Bengali text is illustrated in Figure 7 with an example. Figure 7: Steps of Question Generation from Bengali Sentences 3.3.1 Simple Sentence Extraction First, the complex and compound sentences are transformed into simple sentences. But there are many rules to do it. Here only a few rules have been used. The rests are kept for future development. For example, there is a complex sentence “যিদ েকান এলাকা �ািবত হেয় �িত হয়, তেব বনযা হেয়েছ ধরা হয়”. Now we have to remove the subordinate clause marker and the finite verb should be changed as “VR (verb root) + েল”. The remaining is the same as before. Another example is “পশপািখর ্নবন িবন� হয় এবং সিদ �ংস হয়”. Since this is a compound sentence, it is needed to remove the conjunction and store as different sentences. 3.3.2 Named Entity, Number, Time Recognition After splitting into tokens, we choose different bases for question (person, number, time, etc.). To select token or a series of tokens as persons (নাম), it is needed to do some tasks: (i) While searching in the token list of a sentence, all the continuous persons are stored in a vector named vecP. (ii) If a token is found that is not a person and not a ‘,’ or ‘এবং’, then we stop inserting that token into vecP. At this stage, we check whether the last inserted person has a ‘Biv’ (িবভি�) or not. If no, then the question word is “িক” depending on from starting of the sentence whether we get the “নাম”. If the “নাম” is not found, then the question word is “েক”. GESJ: Computer Science and Telecommunications 2018|No.2(54) ISSN 1512-1232 (iii) If the vecP.size() is greater than one, then the question word should be েক েক or িক িক as shown in Table 2. In case of number and time detection, we have followed the same procedure. (i) In detection of number, we check a Regular Expression (0-9)+.(.).(0-9)+.(-) 0-9)+.(.).(0-9)+ for any number (e.g., 45, 375.95) or a range of number (e.g., 45-50, 6.4-3.4). (ii) In detection of time, we check a Regular Expression (0-9)+.(.|/|-).(0-9)+.(.|/|-) 0-9)+ for general date format in Bengali. (iii) Then the question word is taken according to Table 2. Table 2. Question Words for Different Tags Tag Question Word Case Person েক vecP.size() = 0 and ‘Biv’ is null and not found ‘নাম’ Person িক vecP.size() = 0 and ‘Biv’ is null and found 'নাম’ Person কা + Biv Biv is not null Person েক েক vecP.size() > 1 and ‘Biv’ is null and not found ‘নাম’ Person িক িক vecP.size() > 1 and ‘Biv’ is null and found ‘নাম’ Time কখন Number কত Location েকাথায় 3.4 Output Module This module just presents the generated questions along with their answers in a GUI. There is nothing else to do here. 4. Experimental Result In this section, the performance of the proposed system is checked</s> |
<s>using some Bengali and English texts as input. A list of some of these texts is given in Table 3. Table 3. Input Texts in Bengali and English Title of the Text Contents Language Cricket Team ২০১৫ সাল বাংলােদশ ি�েকেটর �ণর যুগ িছল। িক� ২০১৭ সােল ছ� হািরেয় েফেল। মাশরািফ িবন মুতর ্ার েনতৃে� বাংলােদেশর েপস েবািলং আ�মেণ নতুন যুগ শ হেয়েছ গত দইু বছের। এেত অনযতম ভরসা হেলন েমা�ািফ্রু রহমান ও তাসিকন আহেমদ। িক� চযািিয়নস �িফ েথেক দি�ণ আি�কা তােদরেক হতাশ কেরেছন। েমা�ািফ্ সফর অসমা� েরেখই েদেশ িফেরেছন। তাসিকন পানিন বলার মেতা সাফলয। েমা�ািফ্-তাসিকনেদর ছ� হািরেয় েফলা ভাবাে� িবিসিব সভাপিত না্মুল হাসানেক। শ�বার গলশােন িন্ বাসভবেন সংবাদমাধযেমর সামেন না্মুল বাংলােদেশর েপস েবালারেদর পারফরমযাা বেলন। েবল-মাশরািফরা িক েবািলং করেছ। সমসযা হে� দুই্ন েবালারেক িনেয়। গত চার বছের েমা�ািফ্-তাসিকনরা আমােদর গ�প�ণর ে�ক� এেন িদেয়েছ। েমা�ািফে্র হাঁটুেত ২ বার অে�াপচার হয়। Bengali GESJ: Computer Science and Telecommunications 2018|No.2(54) ISSN 1512-1232 Bangladesh Bangladesh is a country in South Asia. It shares land borders with India and Myanmar. Nepal, Bhutan and China are located near Bangladesh but do not share a border with it. The country's maritime territory in the Bay of Bengal is roughly equal to the size of its land area. Bangladesh is the world's eighth most populous country. Dhaka is its capital and largest city. Chittagong has the country's largest port. English Flood বাংলােদেশর �াকৃিতক দুেযরােগর মেধয বনযা অনযতম। যিদ েকান এলাকা �ািবত হেয় �িত হয়, তেব বনযা হেয়েছ ধরা হয়। বাংলােদশ একি নদনমাতৃক ও বৃি�বহল েদশ। এখােন বািষরক বৃি�পােতর পিরমাণ ২৩০০ িমিলিমটার। ৫৭ি আআ র্ ািতক নদনসহ ৭০০ি নদন এ েদেশ ্ােলর মেতা িব�ার কের আেছ। এর মেধয ৫৪ি নদনর উউসসল ভারেত অবিসত। বাংলােদেশ বনযার �য়�িত বযাপক। বনযায় ফসেলর �িত হয়। মানুেষর মৃতুয এবং ্নবনযা�া বযাহত হয়। পশপািখর ্নবন িবন� হয়। �ংস হয় সিদ। ২০০০ সােলর বনযায় েদেশর ১৬ি ে্লার ১৮৪০০ েহ�র ্িমর ফসল ন� হয়। উউপাদন িহেসেব এ �িতর পিরমাণ ৫২৮০০০ েমি�ক টন। Bengali Nawab Sirajuddoula িসরা্-উদ-েদৗলার ্� ১৭৩৩ সােল। নবাব িসরা্-উদ-েদৗলা িছেলন বাংলার নবাব আলনবদ� খান-এর নািত। আলনবদ� খােনর েকান পু� িছল না। তাঁর িছল িতন কনযা। িতন কনযােকই িতিন িনে্র বড়ভাই হাি্ আহমদ-এর িতন পুে�র সােথ িবেয় েদন। আেমনা েবগেমর দুই পু� ও এক কনযা িছল। পু�রা হেলন িসরা্-উদ-েদৗলা এবং িম র্ া েমেহদন। আলনবদ� খান যখন পাটনার শাসনভার লাভ কেরন, তখন িসরা্-উদ-েদৗলা-এর ্� হয়। িতিন িসরাে্র ্�েক েসৗভােগযর ল�ণ িহেসেব িবেবচনা কের আনে�র আিতশেযয নব্াতকেক েপাষযপু� িহেসেব �হণ কেরন। িসরা্ তার নানার কােছ িছল খুবই আদেরর। িতিন মাতামেহর ে�হ-ভােলাবাসায় বড় হেত থােকন। িসরা্-উদ-েদৗলা ১৭৩৩ সােল ্��হণ কেরন। মনর্াফর তার েকান আত্মনেয়র মােঝ পেড়ন না। কা্ন ইসা তার চাচা হন। Bengali The United Nations is an intergovernmental organization tasked to promote international cooperation and to create and maintain international order. A replacement for the ineffective League of Nations, the organization was established on 24 October 1945 after World War II with the aim of preventing another such conflict. At its founding, the UN had 51 member states; there are now 193. The headquarters of the UN is in Manhattan, New York City, and is subject to extraterritoriality.</s> |
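As a quick arithmetic check of the performance figure derived from Table 4 in the following paragraph, the averages can be recomputed directly. The counts below are copied from Table 4; the short Python script itself is only illustrative.

human  = {"Cricket Team": 28, "Bangladesh": 18, "Flood": 24, "Nawab Sirajuddoula": 28, "UN": 22}
system = {"Cricket Team": 20, "Bangladesh": 26, "Flood": 15, "Nawab Sirajuddoula": 17, "UN": 16}

avg_human  = sum(human.values()) / len(human)     # (28 + 18 + 24 + 28 + 22) / 5 = 24.0
avg_system = sum(system.values()) / len(system)   # (20 + 26 + 15 + 17 + 16) / 5 = 18.8
print(f"performance = {avg_system / avg_human * 100:.2f}%")  # 78.33%

This agrees with the 78.33% reported in Section 4.1.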
<s>The organization is financed by assessed and voluntary contributions from its member states. English 4.1 Question Generation Task The number of generated questions by human and the proposed system for the texts listed in Table 3 is provided in Table 4. On the basis of the number of generated questions, we have evaluated the performance of the proposed system. A comparison chart between human and the proposed system on question generation is also shown in Figure 8. Table 4. Number of Generated Questions by Human and the QA System Title of the Text No. of Questions Generated by Human No. of Questions Generated by the Proposed System Cricket Team 28 20 Bangladesh 18 26 Flood 24 15 Nawab Sirajuddoula 28 17 UN 22 16 GESJ: Computer Science and Telecommunications 2018|No.2(54) ISSN 1512-1232 Figure 8: A Comparison Chart between Human and the QA System on Question Generation The average number of questions generated by human is (28 + 18 + 24 + 28 + 22) / 5 = 24 and the proposed QA System is (20 + 26 + 15 + 17 + 16) / 5 = 18.8. So, the performance of the QA system on question generation is (18.8 / 24) * 100% = 78.33% compared to the human. 4.2 Correctness Analysis of Generated Questions The generated questions are not always correct (grammatically or syntactically). Here we have analyzed the system on 10 randomly selected questions generated from the texts listed in Table 3. The correctness analysis of the selected questions is given in Table 5, Table 6, Table 7, Table 8, and Table 9. Table 5. Correctness Analysis of the Generated Questions from the ‘Cricket Team’ Texts Questions Generated by the QA System Correctness কত সাল বাংলােদশ ি�েকেটর �ণর যুগ িছল? Ok িক� কত সােল ছ� হািরেয় েফেল? No মাশরািফ িবন মুতর ্ার েনতৃে� বাংলােদেশর েপস েবািলং আ�মেণ নতুন যুগ শ হেয়েছ গত কত বছের? সমসযা হে� কত্ন েবালারেক িনেয়? Ok কার েনতৃে� বাংলােদেশর েপস েবািলং আ�মেণ নতুন যুগ শ হেয়েছ গত দইু বছের? এেত অনযতম ভরসা হেলন েক েক? No কােদর ছ� হািরেয় েফলা ভাবাে� িবিসিব সভাপিত না্মুল হাসানেক? Ok কখন গলশােন িন্ বাসভবেন সংবাদমাধযেমর সামেন না্মুল বাংলােদেশর েপস েবালারেদর পারফরমযাা বেলন? েমা�ািফে্র হাঁটুেত কত বার অে�াপচার হয়? Ok GESJ: Computer Science and Telecommunications 2018|No.2(54) ISSN 1512-1232 েমা�ািফ্-তাসিকনেদর ছ� হািরেয় েফলা ভাবাে� কােক? Ok Table 6. Correctness Analysis of the Generated Questions from the ‘Bangladesh’ Texts Questions Generated by the QA System Correctness What shares land borders with India and Myanmar? No What does Bangladesh share with India and Myanmar? Ok What is roughly equal to the size of its land area? Ok What do Nepal, Bhutan and China not share with it? Ok What is the country's maritime territory in the Bay of Bengal roughly equal to? Ok Are Nepal, Bhutan and China located near Bangladesh? Ok What is Bangladesh? Ok What does Chittagong have? Ok Is Dhaka its capital and largest city? Ok What is the world's eighth most populous country? No Table 7. Correctness Analysis</s> |
<s>of the Generated Questions from the ‘Flood’ Texts Questions Generated by the QA System Correctness কত সােলর বনযায় েদেশর ১৬ি ে্লার ১৮৪০০ েহ�র ্িমর ফসল ন� হয়? Ok এখােন বািষরক বৃি�পােতর পিরমাণ কত িমিলিমটার? No বাংলােদশ িক? Ok কতি আআ র্ ািতক নদনসহ ৭০০ি নদন এ েদেশ ্ােলর মেতা িব�ার কের আেছ? No কতি নদনর উউসসল ভারেত অবিসত? Ok িকেসর �াকৃিতক দেুযরােগর মেধয বনযা অনযতম? No ২০০০ সােলর বনযায় েদেশর কতি ে্লার ১৮৪০০ েহ�র ্িমর ফসল ন� হয়? Ok েকাথায় বনযার �য়�িত বযাপক? Ok ২০০০ সােলর বনযায় েদেশর ১৬ি ে্লার কত ্িমর ফসল ন� হয়? Ok েকাথায় ফসেলর �িত হয়? No GESJ: Computer Science and Telecommunications 2018|No.2(54) ISSN 1512-1232 Table 8. Correctness Analysis of the Generated Questions from the ‘Nawab Sirajuddoula’ Texts Questions Generated by the QA System Correctness িসরা্-উদ-েদৗলার ্� কত সােল? Ok িসরা্-উদ-েদৗলা কত সােল ্��হণ কেরন? Ok কার ্� ১৭৩৩ সােল? Ok েক িছেলন বাংলার নবাব আলনবদ� খান-এর নািত? Ok নবাব িসরা্-উদ-েদৗলা িছেলন বাংলার কার নািত? No কার েকান পু� িছল না? Ok আলনবদ� খােনর িছল কত কনযা? No িতন কনযােকই িতিন িনে্র কার িতন পুে�র সােথ িবেয় েদন? No কার িছল িতন কনযা? Ok কার দইু পু� ও এক কনযা িছল? Ok Table 9. Correctness Analysis of the Generated Questions from the ‘UN’ Texts Questions Generated by the QA System Correctness Where is the headquarters of the UN? Ok What was the ineffective League of Nations? Ok What is in Manhattan? Ok What was established on 24 October 1945 after World War II in order to prevent another such conflict? Ok What was a replacement for the ineffective League of Nations established on 24 October 1945 after? No When was a replacement for the ineffective League of Nations established on 24 after World War II in order to prevent another such conflict? Ok What is the United Nations? Ok Did the UN have 51 member states at its founding? Ok How many member states did the UN have a replacement for the ineffective League of Nations, the organization's founding? Ok Is the organization financed by assessed and voluntary contributions from its member states? Ok GESJ: Computer Science and Telecommunications 2018|No.2(54) ISSN 1512-1232 The percentage of accuracy of the generated questions from the texts listed in Table 3 is 80%, 80%, 60%, 70%, and 90% respectively. These accuracy values are also plotted in Figure 9 to get the accuracy graph of the QA system. Figure 9. The Graph of the Percentage of Accuracy 5. Conclusion There are many tasks on QA system and most of them are of monolingual. The bilingual or multilingual QA systems also exist. But the bilingual QA systems on Bengali and English are very few. Here we proposed a basic QA system for Bengali and English. It can generate questions along with their answers from both Bengali and English texts. It can also generate two types of questions such as true-false and short question. It is capable to generate questions which are more accurate both in grammatically and syntactically than the generated questions</s> |
<s>by the existing QA systems. The performance and accuracy of the system are evaluated by inputting different texts both in Bengali and English. We are able to increase the performance of the system up to 78.33% and the percentage of accuracy of the generated questions is 76% on average. The performance and accuracy of the system may be improved if the obvious shortage of Bengali resources and tools may be overcome. References [1] Hitomi, N. A Japanese-English bilingual child’s system of answering negative questions. Japan Journal of Multilingualism and Multiculturalism, 1995, 1, 1, 28-37. [2] Xu, Y., Goldie, A. & Seneff, S. Automatic question generation and answer judging: a Q&A game for language learning. In Proc. of SIGSLaTE, Warwickshire, 2009. [3] Plamondon, L., & Foster, G. Quantum, a French/English cross-language question answering system. In Comparative Evaluation of Multilingual Information Access Systems: Workshop of the Cross-Language Evaluation Forum, CLEF. Berlin, 2003, 549-558. [4] Kaur, H. & Rimpi. A review on novel scoring system for identify accurate answers for factoid questions. International Journal of Science and Research, 2013, 2, 9, 154-157. [5] Plamondon, L., Lapalme, G., & Kosseim, L. The QUANTUM question answering system at TREC-11. In Proc. of 11th Text Retrieval Conference, Gaithersburg, 2002. [6] Filho, P. P. B., Uzeda, V. R. D., Pardo, T. A. S., & Nunes, M. D. G. V. Using a text summarization system for monolingual question answering. In Núcleo Interinstitucional de Lingüística Computacional (NILC), University of São Paulo, Brasil, 2006. GESJ: Computer Science and Telecommunications 2018|No.2(54) ISSN 1512-1232 [7] Sharma, N. & Abhilasha, E. R. Automatic question generation from Punjabi text with MCQ based on hybrid approach. International Journal of Computer Engineering & Application, 2015, 7-19. [8] Haque, N. & Rosner, M. A prototype framework for a Bangla question answering system using translation based on transliteration and table look-up as an interface for the medical domain. M.S. Thesis, Computer Science Engineering & Artificial Intelligence, Univ. of Malta, Malta, 2010. [9] Pakray, P. Multilingual restricted domain QA system with dialogue management (Bengali and Telugu as a case study). M.S. Thesis, Computer Science Engineering, Jadavpur Univ., Kolkata, 2007. [10] Hoque, S., Arefin, M. S. & Hoque, M. M. BQAS: A bilingual question answering system. In Proc. of 2nd International Conference on Electrical, Information and Communication Technologies (EICT), Khulna, 2015, 586-591. [11] Banerjee, S. & Bandyopadhyay, S. Ensemble approach for fine-grained question classification in Bengali. In Proc. of 27th Pacific Asia Conference on Language, Information and Computation, Kolkata, 2013, 75-84. [12] Heilman, M. Automatic factual question generation from text. PhD Thesis, Language Technologies Institute, Carnegie Mellon University, Pittsburgh, 2011. __________________________ Articles received: 2018-04-18</s> |
International Conference on Bangla Speech and Language Processing (ICBSLP), 27-28 September, 2019

Bengali Question Answering System for Factoid Questions: A Statistical Approach

Sourav Sarker, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet-3114, Bangladesh, sourav39@student.sust.edu
Syeda Tamanna Alam Monisha, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet-3114, Bangladesh, alammonisha@gmail.com
Md Mahadi Hasan Nahid, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet-3114, Bangladesh, nahid-cse@sust.edu

Abstract—Question answering is currently one of the most active and interesting research topics in computational linguistics. Bengali, despite being among the most spoken languages in the world, has so far received comparatively little attention in this area. This paper describes an attempt to develop a closed-domain factoid question answering system for the Bengali language. Combining multiple sources for answer extraction, the proposed system extracts answers with 66.2% accuracy when the object name is mentioned and 56.8% when it is not, and retrieves a document from which the answer can be extracted around 72% of the time. Among the sub-parts of the system, the question classifier and the document classifier achieve 90.6% and 75.3% accuracy respectively over five coarse-grained categories.

Index Terms—Question Answering (QA) System, Bengali Question Answering System, Factoid QA System

I. INTRODUCTION

An automated question answering system is a program that can converse with a user in natural language in such a way that no one can distinguish it from a real human being. Question answering is currently one of the hot topics in Natural Language Processing (NLP) research. A system can be either closed-domain or open-domain: closed-domain question answering deals with questions within a specific domain, whereas open-domain question answering deals with questions about nearly anything, based on world knowledge. In general, questions are of two types: factoid questions, which can be satisfied by a short text, and descriptive or complex questions, which need to be answered in more than one line.

The amount of work on question answering systems is increasing day by day. For English a great deal of work has been done, and a number of working question answering systems already exist. Although Bengali is one of the most widely spoken languages, the work done for Bengali is still very limited compared to English.

We worked towards developing a closed-domain question answering system for factoid questions in Bengali. We initially chose Shahjalal University of Science & Technology (SUST) as our domain because every year, during the SUST admission test, candidates have many queries related to SUST. Various social media groups are formed to help candidates with information, updates and queries, and candidates also ask questions on the official admission-test website. We therefore wanted to create a common platform where candidates can get answers to their queries instantly, and so we set out to build a Bengali question answering system that can reply to such queries.

II. RELATED WORKS

A great deal of research has been done, and is still ongoing, on question answering systems in different languages.
Besides, there is also work on question classification, question features, question taxonomies and answer extraction, which are the sub-parts of a question answering system. A number of question answering systems have been developed since the 1960s. Among the earlier systems, some are domain-restricted and
some are generalized. AnswerBus is an open-domain question answering system where the information related to the answer is retrieved from the web at sentence level. The authors used five search engines (Google, Yahoo, AltaVista, WiseNut and Yahoo News) to extract the web documents containing the answers to the users' questions. Its rate of correct answers to TREC-8's (Text REtrieval Conference-8) 200 questions is 70.5% [1]. JAVELIN is another open-domain question answering system [2]. The team proposed three QA runs, JAVELIN I, JAVELIN II [3] and JAVELIN III [4], of which JAVELIN III can be used for cross-lingual tasks. Some of the earlier domain-restricted QA systems are BASEBALL and LUNAR [5][6]. Abney et al., 2000 described a system that handles arbitrary questions by producing a candidate list of answers ranked by their plausibility [7]. The system was evaluated on the TREC question-answering track, which showed that the correct answer appeared in the top five answers 46% of the time, with a mean score of 0.356.

Nowadays there is a remarkable increase in Natural Language Processing (NLP) research for the Bengali language, and research on Bengali question answering systems is increasing as well. Banerjee et al., 2014 made the first attempt at building a factoid question answering system for Bengali, in which answers are processed with the help of named entities [8]. They also discussed the challenges faced in developing such a system for Bengali. Another question answering system was developed for Bengali using anaphora-cataphora resolution [9]. The authors experimented with the system for both Bengali and English and used semantic and syntactic analysis. The model reduces the complexity of using a noun instead of a pronoun in the requested answer with respect to the given question queries for Bengali and provides 60% accuracy.

III. CORPUS CONSTRUCTION & ANALYSIS

The dataset is an important issue in the development of a question answering system. To build a question answering system we need two types of data: a knowledge base and a question database. The data are then prepared individually for training the question classifier and the document categorizer.

As we worked on a closed-domain Bengali factoid question answering system for the domain of Shahjalal University of Science & Technology (SUST), we could not find any prepared dataset for our work, so we had to prepare our own questions and knowledge base. This data collection task was one of the most challenging parts, as no resources were available. We used different sources and collected data for questions and documents in different ways.

A. Question Database

Various sources were used for building the question dataset, such as crowd sourcing, social media and manual generation, but most of the data were collected through crowd sourcing. We collected questions from the students of Shahjalal University of Science & Technology (SUST) as well as from the official SUST website, where we obtained the questions frequently asked by candidates. We also prepared questions from different documents, articles and news about SUST.

B. Knowledge Base

The knowledge base is the set of documents in which the answers to the questions are searched for and extracted. We have
<s>built our knowledge base based onthe website which carries the information solely aboutShahjalal University of Science & Technology (SUST) likewww.sust.edu, en.wikipedia.org/wiki/Shahjalal_University_of_Science_and_Technology etc. andnews from different web portals. We also collected some ofthe paragraphs about SUST by crowd-sourcing.TABLE ISources of Question Database and Knowledge BaseData Type Collection Type Source Amount of DataQuestionCrowd Sourcing 2nd Year students,Dept of CSE,SUST11300Social MediaData CrawlingFacebook Group:SUST AdmissionAid1055Manual Generation Authors 3000DocumentCrowd Sourcing 2nd Year students,Dept of CSE,SUSTSUST based websites www.sust.edu,Wikipedia sust100News Portals Bdnews24.com,eprothom-alo.com,www.sustnews24.comthedailystar.com etc.IV. ANSWER TYPE TAXONOMYAs we have worked for a closed domain question answer-ing system on the domain Shahjalal University of Science &Technology (SUST) we defined five coarse-grained classesrelated to SUST for question classification and documentcategorization. The questions and documents are classifiedin these five categories. Table II shows the category details.TABLE IIQuestion and Document CategoriesClass Name Short form DescriptionAdministration ADS Questions that require administrative inform-ation as answer and documents of administr-ative type are of ADS classAdmission ADM Questions that require admission related inf-ormation as answer and documents of admi-ssion type are of ADM classAcademic ACD Questions that require academic informationas answer and documents of academic typeare of ACD classCampus CAM Questions that require campus related infor-mation as answer and documents related tocampus are of CAM classMiscellaneous MISC Questions that require any information otherthan the above four types and documentsother than the four types are of MISC classwww.sust.eduen.wikipedia.org/wiki/Shahjalal_University_of_Science_and_Technologyen.wikipedia.org/wiki/Shahjalal_University_of_Science_and_TechnologyTABLE IIISample Tagged QuestionQuestion Labelসাে েমাট কতিট িসট আেছ ADMসাে িতবছর কতিট কের ারিশপ েদয় ACDসা মােন িক MISCসরকারী বৃি া েদর িক ভিত িফ িদেত হয় ADSসা ক া ােস িক খাবােরর দাম সহনীয় CAMTABLE IVSample Tagged DocumentDocument Labelমানবতার জ েশেখা োগানেক িতপাদ কের ২০১২ সােলর ১১ জানুয়ারী িকছুসেচতন িশ ক-িশ াথীর হাত ধের শু হয় শাহজালাল িব ান ও যুিিব িবদ ালেয়র একমা কৃিত ও পিরেবশ িবষয়ক সংগঠন ি ন এ ে ারেসাসাইিটর পথচলা। িব িবদ ালেয়র িশ াথীেদর পাশাপািশ তৃনমূল পযােয়পিরেবশ সেচতনতা বৃি েত কাজ কের যাে সংগঠনিট।CAMFig. 1. Percentage of questions in each categoryFig. 2. Percentage of documents in each categoryV. METHODOLOGYThe architecture that we have proposed for our systemis shown in figure 3. As shown in the figure we haveused three sources for answer extraction: mapped question,collection of documents and internet resource.Fig. 3. Proposed architecture for our systemAs depicted in the figure for extracting an answer atfirst a list of stem words is created from the question. Wehave used a Bengali rule based stemmer for the process. Atfirst the answer is looked for in the frequently asked section.Here we have mapped the most frequently asked questions tothe answers. If the answer is found here considering commontag words (শািব, শািব িব, শাহজালাল িব িবদ ালয়, শাহজালাল িব ান ও যুিিব িবদ ালয়, সা ) then an answer will be provided otherwise theanswer will be searched in the categorized documents. 
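The FAQ-mapping stage just described is not published as code; the following Python sketch is one possible reading of it. The tag words, FAQ entries, stemmer and overlap threshold are illustrative placeholders, not the authors' implementation (the real system uses the Bengali tag words listed above and a Bengali rule-based stemmer).

```python
# Illustrative sketch of the FAQ-first lookup (not the authors' code).
# A question is answered from the hand-mapped FAQ table only when it contains
# one of the common domain tag words and overlaps strongly with a stored question.

TAG_WORDS = {"SUST", "shabi", "shabiprobi"}   # stands in for the Bengali tag words above

FAQ = {   # hypothetical mapped question -> answer pairs
    "how many faculties does SUST have": "7",
    "how many halls for male students in SUST": "3",
}

def stem(word: str) -> str:
    # Placeholder for the Bengali rule-based stemmer used in the paper.
    return word.lower()

def keywords(text: str) -> set:
    return {stem(w) for w in text.split()}

def faq_lookup(question: str, threshold: float = 0.6):
    if not any(tag.lower() in question.lower() for tag in TAG_WORDS):
        return None                              # no domain tag word: skip the FAQ stage
    q = keywords(question)
    best_answer, best_score = None, 0.0
    for stored_q, answer in FAQ.items():
        s = keywords(stored_q)
        score = len(q & s) / max(len(q | s), 1)  # simple keyword overlap (Jaccard)
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer if best_score >= threshold else None
```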
If the answer is not found in the FAQ mapping, the asked question is first classified to its expected answer type and then searched for in the documents of the same category; if the answer is not found in the categorized documents either, the system searches the internet sources that are predefined for the particular domain and provides an answer.
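Putting the three stages together, the overall answer search can be sketched as below. The helper callables are assumptions standing in for the components described in the paper (the FAQ lookup, the question classifier, the category-filtered document search and the predefined web fallback).

```python
# Schematic of the three-stage answer search described above (illustrative only).
def answer(question: str, faq_lookup, classify_question, search_documents, search_web):
    # Stage 1: hand-mapped frequently asked questions.
    ans = faq_lookup(question)
    if ans is not None:
        return ans
    # Stage 2: restrict the search to documents of the predicted category
    # (ADS, ADM, ACD, CAM or MISC) to shrink the search space.
    category = classify_question(question)
    ans = search_documents(question, category)
    if ans is not None:
        return ans
    # Stage 3: fall back to internet sources predefined for the SUST domain.
    return search_web(question)
```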
We wanted to build a question answering system for the Bengali language on a closed domain and, not having enough resources to build it, we had to work from scratch. We have divided our methodology for answering a factoid question into four basic parts:
1) Data Preparation
2) Question Classification
3) Document Categorization
4) Answer Processing

A. Data Processing
As we collected data from different sources, we needed to prepare and clean the dataset. The following tasks were done to prepare the corpus for further processing:
1) Removed stop words (অবশ্য, অনেক, অনেকে, অনেকেই, অন্তত, ভাবে, মধ্যে etc.)
2) Removed sign characters
3) Tokenized the words in special cases
4) Checked and manually corrected spelling mistakes in the raw data
5) Rechecked the labels that were assigned manually

B. Question Classification
Question classification is an important first step towards building a question answering system [10]. Classifying a question to its expected answer type reduces the search space by a considerable amount [11]. We used four machine learning algorithms, Stochastic Gradient Descent (SGD), Decision Tree (DT), Support Vector Machine (SVM) and Naive Bayes (NB), for our question classification phase [12]. For feature extraction, similar words and both bi-grams and tri-grams were used, with tri-grams providing better results. We also tested a dynamic word clustering model, as deep learning methods are gaining popularity these days [13].

C. Document Categorization
The documents that we collected were also classified into the predefined categories. For document categorization we used a different approach from question classification. As document classification follows passage classification, we used word embeddings here. A word embedding is a learned representation of text in which words with similar meanings have nearly the same representation. We used fastText as our embedding technique and implemented a convolutional neural network (CNN) classifier.

D. Answer Processing
The most important and challenging part of our question answering system is the answer extraction method. As shown in Figure 3, we used three sources for the answer extraction process. The following techniques were used in our answer extraction phase.

1) Vector Space Model (VSM): In the vector space model, documents and queries are represented as vectors of features representing the terms that occur within the collection. A document $d_j$ and a question $q$ can be represented by the following vectors:

$\vec{d}_j = (w_{1,j}, w_{2,j}, w_{3,j}, \dots, w_{N,j})$
$\vec{q} = (w_{1,q}, w_{2,q}, w_{3,q}, \dots, w_{N,q})$

Here the number of dimensions $N$ is the total number of terms in the whole collection. The match between the question and a particular document is measured by the following similarity function:

$sim(\vec{q}, \vec{d}_j) = \dfrac{\sum_{i=1}^{N} w_{i,q}\, w_{i,j}}{\sqrt{\sum_{i=1}^{N} w_{i,q}^{2}}\;\sqrt{\sum_{i=1}^{N} w_{i,j}^{2}}}$

After calculating this score, an answer is generated using the sentence with the highest rank.

2) Comparison of VSM and Edit Distance: We compared edit-distance matching with VSM matching and took the maximum of the two scores. The answer with the highest score is provided, and if multiple answers have the same score then the one with minimum length is selected, since factoid questions require a single fact as the answer.

E. Performance Metric
We used the accuracy measure to evaluate our experiments. Accuracy is the number of correct predictions made by a model over all predictions made, and is measured by the
formula:

$\text{Accuracy} = \dfrac{TP + TN}{P + N}$

where
P = total number of positive samples
N = total number of negative samples
TP (True Positive) = number of cases where the actual class of the sample is true and it is also predicted as true
TN (True Negative) = number of cases where the actual class of the sample is false and it is also predicted as false

VI. EXPERIMENTS & RESULT ANALYSIS

For training the question classifier and the document categorizer we had to tag the questions and documents manually. In both cases we used 75% of the dataset for training and 25% for testing. For question classification, a Support Vector Machine (SVM) with a linear kernel provides the best accuracy, 90.6%, among the classifiers Stochastic Gradient Descent (SGD), Decision Tree (DT), Support Vector Machine (SVM) and Naive Bayes (NB) that we used [12].

Fig. 4. Best performance of the different classifiers for question classification

For document categorization we implemented a convolutional neural network (CNN) using the fastText skip-gram word embedding technique. An accuracy of 75.3% was obtained for document categorization over the five categories that we defined.

Following our answer extraction technique, the proposed system provides around 56.8% accuracy without mentioning the object name (such as শাবি, শাবিপ্রবি, শাহজালাল বিশ্ববিদ্যালয়, শাহজালাল বিজ্ঞান ও প্রযুক্তি বিশ্ববিদ্যালয়, সাস্ট) and around 66.2% when the object name is mentioned. We obtained a document hit rate of around 72% for our system.

TABLE V. Sample answers to asked questions
Question | Extracted Answer
সাস্টে কয়টি অনুষদ রয়েছে | ৭ টি
সাস্টের বর্তমান ভিসির নাম কি | অধ্যাপক ফরিদ উদ্দিন আহমেদ
ছেলেদের হল কয়টি | ৩ টি

TABLE VI. Performance of the overall system
Type | Score | Condition
Document hits | 72% | Considering object name
Answer accuracy | 66.2% | Considering object name
Answer accuracy | 56.8% | Without mentioning object name
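The answer-scoring code itself is not released; the sketch below is one plausible reading of the VSM / edit-distance comparison in Section V-D, using TF-IDF cosine similarity for the VSM score and a length-normalized edit-distance similarity, and taking the maximum of the two. The length normalization is an assumption; the preference for the shorter answer on ties follows the paper.

```python
# Illustrative sketch of the VSM / edit-distance answer scoring (not the authors' code).
# Requires scikit-learn. Candidate sentences come from documents of the predicted category.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def edit_distance(a: str, b: str) -> int:
    # Standard Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def best_sentence(question: str, sentences: list) -> str:
    vec = TfidfVectorizer().fit([question] + sentences)
    q_vec, s_vec = vec.transform([question]), vec.transform(sentences)
    vsm_scores = cosine_similarity(q_vec, s_vec)[0]

    scored = []
    for sent, vsm in zip(sentences, vsm_scores):
        # Assumed normalization: convert edit distance into a 0..1 similarity.
        ed_sim = 1 - edit_distance(question, sent) / max(len(question), len(sent), 1)
        scored.append((max(vsm, ed_sim), -len(sent), sent))  # shorter answer wins ties
    return max(scored)[2]
```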
VII. CONCLUSION

Throughout our work on the Bengali question answering system we have tried to build a generic factoid question answering system: if the system is simply provided with a knowledge base and a question database, it will be able to extract answers from them. We built a corpus consisting of 15355 questions and 220 documents on our domain, Shahjalal University of Science & Technology. The dataset or corpus is the main obstacle in building a question answering system or any language tool, and no well-established Bengali language tool has been released to date. So, for better performance of a question answering system, natural language processing tools such as a named entity recognizer, a parts-of-speech tagger and a stemmer, as well as the corpus on which the system is based, need to be developed.

So far we have worked with factoid questions only. We plan to extend the work by taking complex and descriptive questions into consideration. We also want to implement more techniques to enhance the performance of the system.

References
[1] Z. Zheng, "AnswerBus question answering system," in Proceedings of the Second International Conference on Human Language Technology Research. Morgan Kaufmann Publishers Inc., 2002, pp. 399–404.
[2] E. Nyberg, T. Mitamura, J. G. Carbonell, J. P. Callan, and K. Collins-Thompson, "The JAVELIN question-answering system at TREC 2002," 2002.
[3] E. Nyberg, R. E. Frederking, T. Mitamura, M. W. Bilotti, K. Hannan, L. Hiyakumoto, J. Ko, F. Lin, L. V. Lita, V. Pedro et al., "JAVELIN I and II systems at TREC 2005," in TREC, vol. 2, no. 1, 2005, p. 1.
[4] T. Mitamura, F. Lin, H. Shima, M. Wang, J. Ko, J. Betteridge, M. W. Bilotti, A. H. Schlaikjer, and E. Nyberg, "JAVELIN III: Cross-lingual question answering from Japanese and Chinese documents," in NTCIR, 2007.
[5] B. F. Green Jr, A. K. Wolf, C. Chomsky, and K. Laughery, "Baseball: an automatic question-answerer," in Papers presented at the May 9-11, 1961, Western Joint IRE-AIEE-ACM Computer Conference. ACM, 1961, pp. 219–224.
[6] W. A. Woods, "Progress in natural language understanding: an application to lunar geology," in Proceedings of the June 4-8, 1973, National Computer Conference and Exposition. ACM, 1973, pp. 441–450.
[7] S. Abney, M. Collins, and A. Singhal, "Answer extraction," in Proceedings of the Sixth Conference on Applied Natural Language Processing. Association for Computational Linguistics, 2000, pp. 296–301.
[8] S. Banerjee, S. K. Naskar, and S. Bandyopadhyay, "BFQA: A Bengali factoid question answering system," in International Conference on Text, Speech, and Dialogue. Springer, 2014, pp. 217–224.
[9] S. Khan, K. T. Kubra, and M. M. H. Nahid, "Improving answer extraction for Bangla Q/A system using anaphora-cataphora resolution," in 2018 International Conference on Innovation in Engineering and Technology (ICIET). IEEE, 2018, pp. 1–6.
[10] S. Banerjee and S. Bandyopadhyay, "Bengali question classification: Towards developing QA system," in Proceedings of the 3rd Workshop on South and Southeast Asian Natural Language Processing, 2012, pp. 25–40.
[11] E. Haihong, Y. Hu, M. Song, Z. Ou, and X. Wang, "Research and implementation of question classification model in Q&A system," in International Conference on Algorithms and Architectures for Parallel Processing. Springer, 2017, pp. 372–384.
[12] S. T. A. Monisha, S. Sarker, and M. M. H. Nahid, "Classification of Bengali questions towards a factoid question answering system," in 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT 2019). IEEE, 2019, pp. 660–664.
[13] Z. S. Ritu, N. Nowshin, M. M. H. Nahid, and S. Ismail, "Performance analysis of different word embedding models on Bangla language," in 2018 International Conference on Bangla Speech and Language Processing (ICBSLP). IEEE, 2018, pp. 1–5.
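The document-categorization step described in Section V-C above (fastText word embeddings feeding a convolutional neural network over the five coarse-grained categories) could be sketched roughly as follows. This is not the authors' configuration: the trainable embedding layer merely stands in for pretrained fastText vectors, and the vocabulary size, sequence length, filter sizes and training schedule are illustrative assumptions.

```python
# Rough sketch of a CNN text classifier for the five document categories
# (ADS, ADM, ACD, CAM, MISC). Requires TensorFlow/Keras.
import tensorflow as tf

VOCAB_SIZE, EMB_DIM, NUM_CLASSES = 20000, 100, 5   # illustrative sizes

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMB_DIM),     # stand-in for fastText vectors
    tf.keras.layers.Conv1D(128, 5, activation="relu"),  # n-gram style convolution
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, validation_split=0.25, epochs=5)  # 75/25 split as in the paper
```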
IMPLEMENTATION OF AN AUTOMATIC QUESTION ANSWERING SYSTEM USING MACHINE LEARNING

SAKIF AHMED ABIR

Preprint · September 2019. DOI: 10.13140/RG.2.2.13210.39368

A project/report submitted in partial fulfilment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering

UNIVERSITY OF LIBERAL ARTS BANGLADESH
Dhaka, Bangladesh
SEPTEMBER, 2019

DECLARATION

This project report is submitted to the Department of Computer Science & Engineering, University of Liberal Arts Bangladesh, in partial fulfilment of the requirements for the degree of Bachelor of Science. I hereby declare that this report is based on my own study of this system; material found by other researchers is acknowledged by reference. This project report, neither in whole nor in part, has been previously submitted for any degree.
Signature : Name : SAKIF AHMED ABIR ID : 151014085 Date : September 19, 2019 ii | P a g e CERTIFICATE OF APPROVAL The project report entitled “Automatic Question Answering system using Machine Learning” is submitted to the Department of Computer Science & Engineering. University of Liberal Arts Bangladesh, Dhaka in partially fulfillment of the requirement for the degree of Bachelor of Science. Dated: September 2019. Khan Raqib Mahmud Lecturer Department of Computer Science and Engineering University of Liberal Arts Bangladesh --------------------------------------- Signature and Date Dr. Mohammad Shahriar Rahman Associate Professor and Head of Department (Acting) Department of Computer Science and Engineering University of Liberal Arts Bangladesh --------------------------------------- Signature and Date iii | P a g e DEDICATION I dedicate my dissertation work to my parents who never stop giving of themselves in countless ways. Special feelings of gratitude to my loving parents, whose words of encouragement and push for tenacity ring in my ears and they never left my side and are very special. I also dedicate this dissertation to my faculty members who have supported me throughout the process. I will always appreciate all they have done, especially my supervisors, Khan Raqib Mahmud sir and Dr. Abul Kalam Al Azad sir for helping me develop my technological and analogical skills and for the many hours of counseling and encouraging and supporting me for entire research. I also want to add Dr. Mohammad Shahriar Rahman sir, our beloved Head of Department who encouraged me throughout my time during graduation. iv | P a g e ACKNOWLEDGEMENT I wish to express my deepest appreciation to all those who helped me, in one way or another, to complete this project. First and foremost, I thank Almighty Allah who provided me with strength, direction and purpose throughout the project. I would like to express my deep and</s> |
<s>sincere gratitude to my project supervisors, Khan Raqib Mahmud Sir and Dr. Abul Kalam Al Azad sir for giving me the opportunity to work with them in this project, I was able to overcome all the obstacles that I encountered in these enduring three months of my project. In fact, they always gave me immense hope every time I consulted with them over problems relating to my project. I would like to extend my indebtedness and thanks to them for their guidance and valuable advice at every step of the thesis. I will be forever thankful to you. I can never pay you back for all the help you have provided me, the experience you have helped me gain by working for you. v | P a g e ABSTRACT “Automatic Question Answering System” has been identified as one of the attracting and most doing research areas in present time. Automatic Question answering is a task which is basically used for flexibility and availability. This system already has drawn the attention of many researchers due to its varying applications such as websites, customer care service, prospect engagement, brand story telling. There are multiple approaches can be taken in consideration to process it so that the system can give support as close to as a human. Among them deep learning-based approaches have been provided with the state-of-the-art performance according to particular Natural Language Processing task. Our approach is to build this model using Machine Learning approach. The experiment shows that our QA system is able to give responses to the user in real time. At first, it takes input then search for the matching answers in the database, then comes up with an answer. We have used out own BENGALI DATASET to train our model, which consist of around 1 thousand test question and answers. We divided the train data and test data into 60% and 40 % respectively. We used this dataset because there is no available Bengali corpus which we can use, but the bigger the dataset is the more accuracy will be gain by the system. The Algorithm is we give input the input statement is processed by logic adapters and return response with answer. Based on the evaluation of users, we can say QAs can produce natural responses but in English. In response section we are facing main issue, it is taking inputs in Bengali but giving responses in English. This can be a work for the future research. vi | P a g e TABLE OF CONTENTS CHAPTER TITLE PAGE DECLARATION…………………………………………...……………........... i CERTIFICATE OF APPROVAL………………………….……………….… ii DEDICATION……………………………………………...………………..... iii ACKNOWLEDGEMENT……………………………….………………........ iv ABSTRACT……………………………………………....………………......... v TABLE OF CONTENTS…………………………………………………..…. vi LIST OF FIGURES……………………………………………………….….. ix 1. INTRODUCTION…………………………………………………….... 1 1.1 Introduction……………………………………………………………… 1 1.2 Problem Identification………………………………………………...…. 2 1.3 Motivation……………………………………………………………….. 2 1.4 Thesis Contribution……………………………………………………… 3 1.5 Thesis Outline…………………………………………………………… 4 2. LITERATURE REVIEW………………………………………………. 5 2.1 Brief knowledge about QA system……………………………………… 5 2.2 ChatBot…………………………………………………………………. 7 2.3 Related Works…………………………………………………………... 10 2.3.1 Statistical QA………………………………………………………… 10 2.3.2 Neural QA……………………………………………………………. 10 2.3.3 E-Commerce Question Answering…………………………………… 11 2.3.4 Reading Comprehension……………………………………………... 11</s> |
<s>2.3.5 Sequence to Sequence Architecture………………………………….. 12 2.3.6 Chat-Bot for collage Management System Using A.I………………... 12 2.3.7 Product-Aware Answer Generation………………………………….. 13 vii | P a g e 3. BACKGROUND………………………………………………………… 14 3.1 Natural Language Processing (NLP)……………………………………. 14 3.2 Machine Learning – ML………………………………………………… 14 3.3 Naive Bayesian Classifier….……………………………………………. 15 3.4 Search Algorithm….…………………………………………………….. 17 4. METHODOLOGY……………………………………………………… 19 4.1 Model Description………………………………………………………. 19 4.2 Algorithm and Flowchart………………………………………………... 22 4.3 Training………………………………………………………………….. 23 4.3.1 Storage Adapters ……………………………………………….. 25 4.3.2 Input Adapters………………………………………………....... 26 4.3.3 Output Adapters………………………………………………… 26 4.3.4 Logic Adapters………………………………………………..… 26 4.3.5 Response Selection Method…………………………………..… 28 4.3.6 Statement – Response Relationship…………………………….. 29 4.4 Environment Setup…..…….……………………..……………………….30 4.5 Implementation…….……………………………………………………. 32 5. SIMULATION AND RESULT………………………………………… 33 5.1 Hardware Used for Simulation………………………………………….. 33 5.2 Software and Libraries Used for Simulation……………………………. 33 5.3 Result……………………………………………………………………. 34 5.3.1 Screenshots of the test Result…………………………………... 35 5.4 Analysis…………………………………………………………………. 42 6. CONCLUSION AND FUTURE WORK………………………………. 44 6.1 Conclusion………………………………………………………………. 44 6.2 Limitations………………………………………………………………. 45 6.3 Further Work…………………………………………………………….. 45 viii | P a g e REFERENCES……………………………………………………………………. 46 APPENDIX A………………………………………………………………….….. 49 APPENDIX B…………………………………………………………………..….. 70 ix | P a g e LIST OF FIGURES Figure 3.1: Workflow of Naive Bayes Classifier…………………………………………….. 17 Figure 4.1: Block Diagram of QA System………………………………………………...….. 19 Figure 4.2: Flowchart of our Proposed system……………………………………………...… 21 Figure 4.3: Process flow diagram of Chatterbot…………………………………………..…... 22 Figure 4.4: Pseudocode of an Instance showing how to use the list trainer class………..…… 24 Figure 4.5: Pseudocode for the list trainer class………………………………………….…… 24 Figure 4.6: Pseudocode for corpus trainer class of Chatterbot...……………………………... 25 Figure 4.7: Pseudocode for the BestMatch logic adapter…………………………………….. 27 Figure 4.8: The Relationship between Statement and Responses…………………..…..…….. 29 Figure 4.9: Mechanism of the reference to all parent statements of the current Statement….... 29 Figure 4.10: Official Unicode Consortium Code Chart of Bengali…………………………... 30 Figure 4.11: Bengali Alphabet…………………………………………………………..……..31 Figure 5.1: Snapshot of the response…………………………………………………………. 35 Figure 5.2: Snapshot of the response…………………………………………………………. 36 Figure 5.3: Snapshot of the response…………………………………………………………. 37 Figure 5.4: Snapshot of the response…………………………………………………………. 38 Figure 5.5: Snapshot of the response…………………………………………………………. 39 Figure 5.6: Snapshot of the response…………………………………………………………. 40 Figure 5.7: Snapshot of the response…………………………………………………………. 41 Figure A.1: Snapshot of the chat.py file…………..…………………………………………. 49 Figure A.2: Snapshot of the chat.py file…………..…………………………………………. 50 Figure A.3: Snapshot of the chat.py file…………..…………………………………………. 51 Figure A.4: Snapshot of the chat.py file…………..…………………………………………. 52 Figure A.5: Snapshot of the chat.py file…………..…………………………………………. 53 Figure A.6: Snapshot of the chat.py file…………..…………………………………………. 54 Figure A.7: Snapshot of the chat.py file…………..…………………………………………. 
55 x | P a g e Figure A.8: Snapshot of the model.py file…………..……………………………………….. 56 Figure A.9: Snapshot of the model.py file…………..……………………………………….. 57 Figure A.10: Snapshot of the model.py file…..……..……………………………………….. 58 Figure A.11: Snapshot of the model.py file…..……..……………………………………….. 59 Figure A.12: Snapshot of the model.py file……..…..……………………………………….. 60 Figure A.13: Snapshot of the model.py file…………....…………………………………….. 61 Figure A.14: Snapshot of the utils.py file……………………………………………………. 62 Figure A.15: Snapshot of the utils.py file……………………………………………………. 63 Figure A.16: Snapshot of the utils.py file……………………………………………………. 64 Figure A.17: Snapshot of the utils.py file……………………………………………………. 65 Figure A.18: Snapshot of the train.py file……………………………………………………. 66 Figure A.19: Snapshot of the train.py file……………………………………………………. 67 Figure A.20: Snapshot of the train.py file……………………………………………………. 68 Figure A.21:</s> |
<s>Snapshot of the train.py file……………………………………………………. 69 Figure B.1: Snapshot of the Database………………………………………………..………. 70 Figure B.2: Snapshot of the Database………………………………………………..………. 71 Figure B.3: Snapshot of the Database………………………………………………..………. 71 Figure B.4: Snapshot of the Database………………………………………………..………. 72 Figure B.5: Snapshot of the Database………………………………………………..………. 72 Figure B.6: Snapshot of the Database………………………………………………..………. 73 Figure B.7: Snapshot of the Database………………………………………………..………. 73 Figure B.8: Snapshot of the Database………………………………………………..………. 74 Figure B.9: Snapshot of the Database………………………………………………..………. 74 1 | P a g e CHAPTER 1 INTRODUCTION 1.1 INTRODUCTION The absolute goal of the Artificial Intelligence research is to build a machine which can converse with a human such that no one can differentiate it from a real human being. After Alan Turing proposed his Turing Test in 1950 in his famous work "Computing Machinery and Intelligence" [16], it has been almost 60 years that researchers are trying to pass the test. As a part of the research to develop an intelligent conversational agent, in 1964, Eliza [14], a simulation of a Rogerian psychotherapist, was developed in Massachusetts Institute of Technology (MIT). It was capable of replying the sentences used by the users back to them. It was the beginning of the research on the conversational agent. A conversational agent is a program which can converse in a natural language with the user either based on the knowledge base (retrieval based closed domain model) or by generating new sentences (generative based open domain model) in a chat interface and sometimes, it is able to perform actions based on the conversations (goal-oriented chatbots, for example, pizza ordering chatbot) [4]. It is popularly known as chatbot or chatterbot or bot. A Question Answering system or simply Chatbot is one of the main concerns of the study of Human-Computer Interaction (HCI) [7].We can cite the examples of Cleverbot or Simsimi, automated tutorials and online assistants as chatbots in use [11]. The use cases of chatbot include customer care representative, sales agent, FAQ answerers etc. With the rise of smartphones and other high computational devices, chatbot becomes one of the hot topics of Natural Language Processing (NLP) research. This is to be noted that all research were based on only one language which is English [10]. Although some works have been done on Chinese and Spanish languages but there are very few in number. Other languages were left out because of the lack of quality data corpus and natural language processing tools. 2 | P a g e We evaluate the chatbot based on user’s satisfaction. Zhou advised that “Evaluation should be adapted to the application and to user needs [24]. If the Chabot is meant to be adapted to provide a specific service for users, then the best evaluation is based on whether it achieves that service or task” [26]. The main task of our research is to interact with the user in fluent and syntactically correct English. 1.2 PROBLEM IDENTIFICATION Our proposed thesis is about the automatic question answering system as the main scope of the problem. For recent years the</s> |
<s>automatic question answering system has been such accepted issue due to increasing number of websites, online shopping sites and telecommunication services. To reduce the human labour and flexible along with quick response privilege different type of approaches have been taken to develop a system for automatic question answering system. To keep pace with the fast growing world automatic question answering systems play a vital role. These systems are tremendous asset for any kind of online and offline system. The fundamental thought behind QA system is to assist man-machine interaction. 1.3 MOTIVATION Last decade was the decade where a technological revolution took place. With the increase in usage of smartphones, the number of the social network users and mobile app users has increased manifold [29]. Now the question is “What’s next?”. From recent research in the pattern of consumer behavior related to smartphones, it is being seen that users has limited them to few numbers of the app and spends most of the time there. So the post- app era demands a new trend which can be chatbot. The user usually searches the solutions of their problems in Google, Yahoo, and other search engines but either they do not retrieve concise or relevant information [18], or they retrieve documents or links to these documents instead of an appropriate answer to their problems. To address such problem the idea of chatbot arises in which user asks in natural language and receives a concise and appropriate answer [14]. 3 | P a g e Chatbots can be new updated version of the mobile app or the search engines which will be able to interact with the user in natural language. It is being seen that the limited number of apps in which the users have confined them to are mostly social and messaging apps. So we can surely say it is a positive indication for the development of chatbots. Age of interacting with computers with predetermined commands or clicking on the graphical user interface is long gone, now it is the demand of time the computer starts to take commands in natural language [22]. With the growth of online-based services like shopping, ordering foods or any official works, it has become necessary to build chatbot to handle large customer base from all over the world at any time. Recently many developments took place in NLP research so with the large availability of tools for building conversation agent it is the time to transition to taking natural language inputs interface. “The need of conversational agents has become acute with the widespread use of personal machines with the wish to communicate and the desire of their makers to provide natural language interfaces.” [15]. There is no Bengali chatbot available currently, one has been made named “Golpo” but we have found only the research paper. The physical software doesn’t exists. So all the above-mentioned reasons are the factors that played an influential role which motivates us to build an automatic question answering system or chatbot. 1.4</s> |
<s>THESIS CONTRIBUTION For better understanding the internal working process and different functions of Machine Learning is to be needed as an active area of research. As our main focus is to create a system for Answering Questions from user automatically by training. So, we will contribute through the thesis that we will develop a model/system that will efficiently can answer and learn from different questions. 4 | P a g e 1.5 THESIS OUTLINE In Chapter 1 we introduces the treatise of human-computer interaction, the need of Chabot and motivation behind this work. In Chapter 2 same topics related papers and existing automatic question answering systems are discussed. In Chapter 3, the main architecture/methodology for this thesis is discussed. In Chapter 4, Experiments by the systems were highlighted. The experimental results, which is obtained from the system are presented answer are discussed in the section 5. The report is ended in Chapter 6 with the summary of the system and relative discussion regarding future works. 5 | P a g e CHAPTER 2 LITERATURE REVIEW In this chapter we will know a little brief about QA system and previous works on QA systems or chatbots. 2.1 BRIEF KNOWLEDGE ABOUT QA SYSTEM Question answering is an important end-user task at the intersection of natural language processing (NLP) and information retrieval (IR). QA systems can bridge the gap between IR-based search engines and sophisticated intelligent assistants that enable a more directed information retrieval process. Such systems aim at finding precisely the piece of information sought by the user instead of documents or snippets containing the answer. A special form of QA, namely extractive QA, deals with the extraction of a direct answer to a question from a given textual context. The creation of large-scale, extractive QA datasets [13] sparked research interest into the development of end-to-end neural QA systems. A typical neural architecture consists of an embedding-, encoding-, interaction- and [22] answer layer p. Most such systems describe several innovations for the different layers of the architecture with a special focus on developing powerful interaction layer that aims at modeling word-by-word interaction between question and context. Although a variety of extractive QA systems have been proposed, there is no competitive neural baseline. Most systems were built in what we call a top-down process that proposes a complex architecture and validates design decisions by an ablation study. Most ablation studies, however, remove only a single part of an overall complex architecture and therefore lack comparison to a reasonable neural baseline [27]. This gap raises the question whether the complexity of current systems is justified solely by their empirical results. Another important observation is the fact that seemingly complex questions might be answerable by simple heuristics. Let’s consider the following example: 6 | P a g e Although it seems that evidence synthesis of multiple sentences is necessary to fully understand the relation between the answer and the question, answering this question is easily possible by applying a simple context/type matching heuristic. The heuristic</s> |
<s>aims at selecting answer spans that a) match the expected answer type (a time as indicated by “When”) and b) are close to important question words [26]. The actual answer “1688-1692” would easily be extracted by such a heuristic. Early work on chatbots [21] relied on handcrafted templates or heuristic rules to do response generation, which requires huge effort but can only generate limited responses. Recently, researchers begin to develop data driven approaches [18]. Statistical goal-oriented dialogue systems have long been modeled as partially observable Markov decision processes (POMDPs) [19]. And are trained using reinforcement learning based on user feedback. [11], recently applied deep reinforcement learning successfully to train non-goal oriented chatbot type dialogue agents. They show that reinforcement learning allows the agent to model long-term rewards and generate more diverse and coherent responses as compared to supervised learning. Retrieval based methods select a proper response by matching message response pairs [3]. Retrieval-based methods [5] retrieve response candidates from a pre-built index, rank the candidates, and select a reply from the top ranked ones. In related work, we found response selection for retrieval-based chatbots in a single turn scenario, because retrieval-based methods can always return fluent responses [8] and single turn is the basis of conversation in a chatbot. 7 | P a g e 2.2 CHATBOT A conversational agent which can converse with a human, based on the provided knowledge base and the natural language it was trained on, in any platform e.g. mobile, website or desktop application etc. is called a chatbot. After Eliza was created, chatbot for long was one of the most sought topics of academic interest among AI researchers. But it was not until 2016, it gained the interest of general mass. With the launch of smartphone based chatbots such as the Apple Siri [3], Amazon Echo, and China's WeChat [4], chatbots turn into one of the hottest trends in technology. Apart from this, some technological giant companies like Facebook Messenger and Skype declared to give full support to the developers for the development of chatbot. Google, the biggest corporation in technology, entered the competition by launching a chatbot application (Allo) powered by its artificial intelligence (AI) and big data. Human-computer interaction is one of the most difficult challenges in Natural Language Processing (NLP) research. It is a combination of different fields which facilitate communication between users and computers using a natural language depending solely on the language and the available natural language processing techniques [14]. The whole world is entering an era of conversational agents. The era of talking machines is not very far away. In words of Alan Turing, we can say “I propose to consider the question, 'Can machines think?” [16]. Much work has been done in information retrieval (IR), machine translation, POS tagging, annotation, and auto-summarization. Although there is quite a large literature on the development of an intelligent machine but still researchers are not successful in making an intelligent machine which can pass Turing Test. Because an intelligent conversational agent</s> |
<s>is the combination of all the fields of Natural Language Processing (NLP). With the advent of smart personal assistants like Siri, Google Chrome, and Cortana [25], we may hope for the fulfillment of the dream of Colby. “Before there were computers, we could distinguish persons from non-persons on the basis of an ability to participate in conversations. But now, we have hybrids operating between a person and non-persons with whom we can talk in ordinary language.” [3]. 8 | P a g e For achieving this goal AI researchers are working relentlessly to make chatbot which can talk like a human. The purpose of a chatbot system is to simulate a human conversation; the chatbot architecture integrates a language model and computational algorithms to emulate informal chat communication between a human user and a computer using natural language. Naturally, chatbot can extend daily life, such as help desk tools, automatic telephone answering systems, to aid in education, business, and e-commerce [6]. Although researchers get success in building chatbot using the retrieval based method but they do not have much success in the generative based method. As [18] has pointed out the cause of it is not having an appropriated database and the probability of a slightly different answer can lead to a different conversation [23]. The main drawback of the generative method is grammatically incorrect and inconsistent sentences. The present time is the transition period of transforming technology taking commands from predetermined commands to taking inputs from natural language. It is being predicted that chatbot is the future of search engines because it is one of the easiest ways to fetch information from a system. The most important advantage of chatbot based search engine is users can easily search by writing in natural language instead of looking up in a search engine or browse several web pages to collect information. The chatbot conversation framework falls into two categories: retrieval based and generative based chatbot [31]. Often considered as an easier approach, the retrieval-based model uses a knowledge base of predefined responses and employs a pattern matching algorithm with a heuristic to select an appropriate response [21]. The retrieval based systems do not generate any new text. They can reply on within the domain of their knowledge base. Generative models do not have any knowledge base. So they generate new text in every response. This model relies on the machine translation techniques. If we compare both the model, we will find advantages and disadvantages in both of them. Since the knowledge base of the retrieval based model is handcrafted by the developer it is not prone to syntactical mistakes. But its disadvantage is it cannot give respond to anything beyond the scope 9 | P a g e of its knowledge base. On the other hand, generative models are very difficult to train and prone to grammatical mistakes. The chatbot framework can be again divided into two types based on its domain: closed domain and open domain 1. The closed domain</s> |
<s>chatbots are those which can reply to a limited number of subjects. A very good example would be goal based chatbot. An open domain chatbot does not have any knowledge base so it has to generate new sentence for each interaction. Since it has no goal so the users can take the conversation to anywhere. Often unrelated, inconsistent and grammatically incorrect sentences are produced in an open domain modeled chatbot. 2. So it is very difficult to build a good open domain chatbot which overcomes all the defaults whereas close domain can be easily built if the corpus is available. As we have discussed the framework let us briefly discuss the internal mechanism of chatbot. There are three important types of artificial intelligence services which are needed to build a chatbot. [27] 1. Rule-based pattern recognition: Mainly any retrieval based chatbot relies on this rule-based pattern recognition. In this model, the rules are the regular expressions. The advantage of a regular expression is that they are flexible and in the case of need new expressions can be created. 2. Natural language classifier: It is used to detect and classify intent of a user command. 3. Rule-based conversation manager: This service can apply rules and generate scripted responses based on the user's intent and data that is associated with the entities, such as location and time. 10 | P a g e Therefore, we discuss the definition, the state of art, the classification of the chatbot framework, internal mechanism and classification of artificial intelligence services to build a chatbot to introduce the background of our work. 2.3 RELATED WORKS In this section we’ll discuss about some previous work related to Question answering systems. 2.3.1 Statistical QA Traditional approaches to question answering typically involve rule-based algorithms or linear classifiers over hand-engineered feature sets. Richardson et al. (2013) proposed two baselines, one that uses simple lexical features such as a sliding window to match bags of words, and another that uses word-distances between words in the question and in the document. Berant proposed an alternative approach in which one first learns a structured representation of the entities and relations in the document in the form of a knowledge base, then converts the question to a structured query with which to match the content of the knowledge base [2]. Wang described a statistical model using frame semantic features as well as syntactic features such as part of speech tags and dependency parses [21]. Chen proposed a competitive statistical baseline using a variety of carefully crafted lexical, syntactic, and word order features [3]. 2.3.2 Neural QA Neural attention models have been widely applied for machine comprehension or question-answering in NLP. Hermann proposed an Attentive Reader model with the release of the CNN/Daily Mail cloze-style question answering dataset. Hill released another dataset stemming from the children’s book and proposed a window-based memory network. Kadlec presented a pointer-style attention mechanism but performs only one attention step. Sordoni introduced an iterative neural attention model and applied it to</s> |
cloze-style machine comprehension tasks [19].
2.3.3 E-Commerce Question Answering
In recent years, product-aware question answering has received considerable attention. Most existing strategies aim at extracting relevant sentences from input text to answer the given question: some works propose a framework for opinion QA which first organizes reviews into a hierarchical structure and retrieves a review sentence as the answer, while others propose an answer prediction model that incorporates an aspect analytic model to learn latent aspect-specific review representations for predicting the answer. External knowledge has also been considered with the development of knowledge graphs. McAuley et al. propose a method using reviews as knowledge to predict the answer, where they classify answers into two types, binary answers (i.e. "yes" or "no") and open-ended answers. Incorporating review information, recent studies employ ranking strategies to select an answer from candidate answers. Meanwhile, product-aware question retrieval and ranking has also been studied: Cui et al. propose a system which combines questions with RDF triples, and Yu et al. propose a model which retrieves the most similar queries from candidate QA pairs and uses the corresponding answer as the final result. However, all of the above task settings differ from this one: unlike these approaches, the method aims to generate an answer from scratch, based on both reviews and product attributes.
2.3.4 Reading Comprehension
Given a question and relevant passages, reading comprehension extracts a text span from the passages as an answer. Recently, based on a widely applied dataset, SQuAD, many approaches have been proposed. Seo et al. use a bi-directional attention flow mechanism to obtain a query-aware passage representation. Wang et al. [32] propose a model that matches the question with the passage using gated attention-based recurrent networks to obtain a question-aware passage representation. Consisting exclusively of convolution and self-attention, QANet achieves state-of-the-art performance in reading comprehension. As mentioned above, most of the effective methods build a question-aware passage representation to generate a better answer; this mechanism makes the models focus on the important parts of the passage according to the question.
2.3.5 Sequence-to-Sequence Architecture
In recent years, sequence-to-sequence (seq2seq) neural networks have proved effective at generating fluent sentences. The seq2seq model was originally proposed for machine translation and later adapted to various natural language generation tasks, such as text summarization and dialogue generation. Rush et al. apply the seq2seq mechanism with an attention model to text summarization; See et al. then add a copy mechanism and a coverage loss to generate summaries without out-of-vocabulary and redundant words. The seq2seq architecture has also been broadly used in dialogue systems: Tao et al. propose a multi-head attention mechanism to capture multiple semantic aspects of the query and generate a more informative response. Different from plain seq2seq models, this line of work utilizes not only the information in the input sequence but also external knowledge from user reviews and product attributes to generate an answer that matches the facts. Unlike the traditional seq2seq setting, there are several tasks in which the input data is in a key-value structure instead of a sequence. In order to utilize such data when generating
text, a key-value memory network (KVMN) has been proposed to store this type of data. He et al. incorporate copying and retrieving knowledge from a knowledge base stored in a KVMN to generate natural answers within an encoder-decoder framework. Another line of work uses a KVMN to store the translation history, which gives the model the opportunity to take advantage of document-level information instead of translating sentences in isolation [27].
2.3.6 Chat-Bot for College Management System Using A.I.
Question answering (QA) systems can be seen as information access systems that try to answer natural language queries by providing answers instead of a simple list of document links. A QA system selects the most appropriate answers by using linguistic features available through natural language techniques; systems differ mainly in their knowledge sources and breadth. Natural Language Dialog Systems (NLDS) are an appropriate and easy way to access information. One such QA system is based on semantic enhancement and the implementation of a domain-oriented, pattern-matching chatbot technology developed within an industrial project (FRASI). The proposed approach simplifies chatbot construction using two solutions. The first is an ontology, exploited in a twofold manner: to construct answers actively as the result of a deduction process over the domain, and to automatically populate, off-line, the chatbot knowledge base with sentences that can be derived from the ontology, describing properties and relations between concepts involved in the dialogue. The second is to preprocess the sentences given by the user so that they can be reduced to a simpler structure that can be mapped to existing queries of the chatbot. The aim is to provide useful information regarding products of interest, supporting consumers in getting exactly what they want. The choice was to implement a QA system using a pattern-matching chatbot technology [23].
2.3.7 Product-Aware Answer Generator (PAAG)
The authors proposed the task of product-aware answer generation, which aims to generate an answer for a product-aware question from product reviews and attributes. To address this task, they proposed the product-aware answer generator (PAAG): an attention-based, question-aware review reader is used to extract semantic units from reviews, and a key-value memory network based attribute encoder is employed to fuse relevant attributes. To encourage the model to produce answers that match the facts, they employed an adversarial learning mechanism to provide additional training signals for answer generation, and to tackle the shortcomings of the vanilla GAN they applied the Wasserstein distance as the value function when training the consistency discriminator. In their experiments, they demonstrated the effectiveness of PAAG and found significant improvements over state-of-the-art baselines in terms of both metric-based and human evaluations, and they verified the effectiveness of each module in PAAG for improving product-aware answer generation [21].
CHAPTER 3 BACKGROUND
In this chapter we discuss natural language processing, machine learning, deep learning, neural networks and deep neural networks. We also explain how they work and what their
advantages and limitations are, and we present the general workflow diagram.
3.1 NATURAL LANGUAGE PROCESSING - NLP
Natural Language Processing (NLP) is a research area of Artificial Intelligence (AI) which focuses on the study and development of systems that allow communication between a person and a machine through natural language [22]. Chatbots belong to the area of NLP given the importance of their ability to understand natural language and to extract relevant information from it. Both retrieval-based and generative-based models must be able to identify information in the input sentence in order to pick or create an answer.
3.2 MACHINE LEARNING - ML
Machine learning is a field of AI that studies and develops techniques capable of learning tasks such as classification or regression from a data set. There are different algorithms, with none of them being, in general, better than all the others (the No Free Lunch theorem) [2]; the suitability of a particular algorithm depends exclusively on the nature and type of the problem addressed. The aim of a learning algorithm is to estimate the behavior of a training set by identifying its inherent patterns. Once this is accomplished, it must be capable of performing tasks such as classification or regression on unseen samples. All learning algorithms require a learning phase in which an objective function is defined as a metric to optimize, giving a reference of how well the model fits the problem (e.g. minimization of the error function); the algorithm then iterates through the training set, optimizing this metric. It is important to have three disjoint sets of samples in machine learning: a training, a validation and a test set [9]. The training set provides the examples for optimizing the objective function. A validation set is required when it is necessary to select the optimal hyperparameters of an algorithm. Finally, the test set is used to test how well the algorithm has learned and generalized the problem.
3.3 NAIVE BAYESIAN CLASSIFIER
In machine learning, naive Bayes classifiers are a family of simple "probabilistic classifiers" based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Naive Bayes has been studied extensively since the 1960s. It was introduced (though not under that name) into the text retrieval community in the early 1960s, and remains a popular baseline method for text categorization, the problem of judging documents as belonging to one category or another (such as spam or legitimate, sports or politics, etc.) with word frequencies as the features. With appropriate pre-processing, it is competitive in this domain with more advanced methods including support vector machines. It also finds application in automatic medical diagnosis [3]. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. Maximum-likelihood training can be done by evaluating a closed-form expression, which takes linear time, rather than by expensive iterative approximation
as used for many other types of classifiers. In the statistics and computer science literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but naive Bayes is not (necessarily) a Bayesian method. Naive Bayes is a simple technique for constructing classifiers: models that assign class labels to problem instances, represented as vectors of feature values, where the class labels are drawn from some finite set. There is not a single algorithm for training such classifiers, but a family of algorithms based on a common principle: all naive Bayes classifiers assume that the value of a particular feature is independent of the value of any other feature, given the class variable. For example, a fruit may be considered to be an apple if it is red, round, and about 10 cm in diameter; a naive Bayes classifier considers each of these features to contribute independently to the probability that this fruit is an apple, regardless of any possible correlations between the color, roundness, and diameter features.
For some types of probability models, naive Bayes classifiers can be trained very efficiently in a supervised learning setting. In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods. Despite their naive design and apparently oversimplified assumptions, naive Bayes classifiers have worked quite well in many complex real-world situations. In 2004, an analysis of the Bayesian classification problem showed that there are sound theoretical reasons for the apparently implausible efficacy of naive Bayes classifiers. Still, a comprehensive comparison with other classification algorithms in 2006 showed that Bayes classification is outperformed by other approaches, such as boosted trees or random forests. An advantage of naive Bayes is that it only requires a small amount of training data to estimate the parameters necessary for classification.
Bayes' theorem gives the probability of an event occurring given the probability of another event that has already occurred. It is stated mathematically as P(A|B) = P(B|A) P(A) / P(B), where A and B are events and P(B) ≠ 0.
I. Basically, we are trying to find the probability of event A given that event B is true; event B is also termed the evidence.
II. P(A) is the prior probability of A, i.e. the probability of the event before the evidence is seen. The evidence is an attribute value of an unknown instance (here, event B).
III. P(A|B) is the posterior probability, i.e. the probability of the event after the evidence is seen.
Note that there is very little explicit training in naive Bayes compared to other common classification methods. The only work that must be done before prediction is estimating the parameters of the features' individual probability distributions, which can typically be done quickly and deterministically. This means that naive Bayes classifiers can perform well even with high-dimensional data points and/or a large number of data points.
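To make the word-frequency setting concrete, the following is a minimal sketch of a naive Bayes text classifier. The tiny spam/ham corpus and labels are invented for illustration, and scikit-learn is used here only as one convenient implementation, not as the library used in this thesis.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy corpus (invented): classify short messages as 'spam' or 'ham' using word frequencies.
texts = ['win a free prize now', 'free money offer', 'meeting at noon tomorrow', 'lunch with the team']
labels = ['spam', 'spam', 'ham', 'ham']

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(texts)   # bag-of-words counts

classifier = MultinomialNB()
classifier.fit(features, labels)             # closed-form maximum-likelihood estimation

print(classifier.predict(vectorizer.transform(['free prize tomorrow'])))
```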
Figure 3.1: Workflow of the Naive Bayes classifier.
3.4 SEARCH ALGORITHM
Artificial intelligence is the study of building agents that act rationally. Much of the time, these agents perform some kind of search algorithm in the background in order to achieve their tasks.
I. A search problem consists of: a. a state space, the set of all possible states you can be in; b. a start state, the state from which the search begins; c. a goal test, a function that looks at the current state and returns whether or not it is the goal state.
II. The solution to a search problem is a sequence of actions, called the plan, that transforms the start state into the goal state.
III. This plan is found by search algorithms.
Many search algorithms exist; the major ones used regularly fall into two groups.
I. Uninformed search: the algorithms in this group have no information about the goal node other than what is given in the problem definition, so the plans to reach the goal state from the start state differ only by the order and/or length of actions. Uninformed search is also called blind search. Examples are: a. depth-first search, b. breadth-first search, c. uniform-cost search.
II. Informed search: here the algorithms have information about the goal state, which makes the search more efficient. This information is obtained through a heuristic. Examples are: a. greedy search, b. A* search, c. graph search.
CHAPTER 4 METHODOLOGY
In this chapter the methodology of our system is described. First we discuss the model description, followed by a detailed discussion of ChatterBot; in Section 4.2 we present the algorithm and flowchart, and then the training methods. The chapter finishes with the implementation of our QA system.
4.1 MODEL DESCRIPTION
We propose a simple encoder- and decoder-based conversational agent that provides chatbot users with an entity from a knowledge base (KB) by interactively asking for its attributes.
Figure 4.1: Block diagram of the QA system (the user input is processed by Logic Adapter 1 and Logic Adapter 2 against the knowledge base to produce the generated response).
Most related work on conversational agents is done with retrieval-based models, where the key to successful response selection lies in accurately matching input messages with proper responses. Our approach to response generation is retrieval based. A retrieval-based model retrieves responses from its knowledge base, generating a response based on heuristics, the user's input and the context. Suppose the input to a retrieval-based model is a text t and a potential response is r; the output of the model is a confidence score C = ConfidenceValue(t, r). The r with the highest score C is the response that is sent to the output adapter. To find a good response, the model calculates the score for multiple candidate responses and chooses the one with the highest score.
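As a schematic sketch of this selection step (ConfidenceValue stands for whatever comparison function the logic adapter uses, for example the Jaccard comparison described in Section 4.3.4; the function below is illustrative, not ChatterBot's actual code):

```python
def select_response(t, candidate_responses, confidence_value):
    """Return the candidate response r whose confidence score C = confidence_value(t, r) is highest."""
    scored = [(confidence_value(t, r), r) for r in candidate_responses]
    best_score, best_response = max(scored, key=lambda pair: pair[0])
    return best_response, best_score
```

The same scoring loop is applied inside each logic adapter, and the adapter whose best response has the higher score wins, as described in Section 4.2.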
"Selecting a potential response from a set of candidates is an important and challenging task for open-domain human-computer conversation, especially for retrieval-based human-computer conversation" [23]. Since there are many difficulties in building an open domain chatbot, we build a domain-specific one: we provide it with an initial knowledge base, and it can always improve its performance by learning from users' responses. The workflow of the chatbot is simple and effective:
i. We get input from the conversational or chat platform.
ii. We process the received input. The input statement is processed by an algorithm which finds the response with the highest likelihood value for the query. The algorithm selects all known statements that most closely match the input statement, returns the known responses to the selected match together with a confidence score computed for each response (here the confidence score is the likelihood value of the response), and finally returns the response with the highest likelihood value.
iii. Finally, the response to the input is returned to the user.
For successful completion of user goals, it is also necessary to equip the dialogue policy with real-world knowledge from a database. In this end-to-end system, this is achieved by constructing a symbolic query from the current belief state of the agent and retrieving the results from the database which match the query.
Figure 4.2: Flowchart of the proposed system.
ChatterBot is a machine-learning-based conversational dialog engine written in Python which is capable of giving responses based on a knowledge base. We chose this engine for our system because it is language independent: since ChatterBot has no language dependency in its design, it can be trained to speak any language. It is a Python library that makes it easy to generate automated responses to a user's input and thus to create a chatbot in any language. To produce different types of responses, ChatterBot applies a selection of machine learning algorithms; this makes it easy for developers to create chatbots and automate conversations with users. The main class of the chatbot is the connecting point between each of ChatterBot's adapters: an input statement is returned by the input adapter, processed and stored by the logic and storage adapters, and then passed to the output adapter to be returned to the user [15].
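This input-process-output cycle can be pictured as a simple console loop around ChatterBot's get_response call. The sketch below is illustrative only: the bot name and exit keyword are placeholders, and the adapter configuration is omitted here (it is shown in Section 4.3.1).

```python
from chatterbot import ChatBot

bot = ChatBot('BengaliQA')              # adapter configuration omitted; see Section 4.3.1

while True:
    text = input('user> ')              # input adapter: read the user's statement from the terminal
    if text.strip().lower() == 'exit':
        break
    response = bot.get_response(text)   # logic adapters select the best-matching known response
    print('bot>', response)             # output adapter: return the response as text
```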
Additionally, the machine-learning nature of ChatterBot allows an agent instance to improve its own knowledge of possible responses as it interacts with humans and other sources of informative data. An untrained instance of ChatterBot starts off with no knowledge of how to communicate. Each time a user enters a statement, the library saves the text they entered and the text that the statement was in response to. As ChatterBot receives more input, the number of responses it can give and the accuracy of each response in relation to the input statement increase. The program selects the closest matching response by searching for the closest matching known statement to the input; the chatbot then chooses a response from the selection of known responses to that statement [13].
4.2 ALGORITHM AND FLOWCHART
Figure 4.3: Process flow diagram of ChatterBot.
Since our system is a retrieval-based closed domain chatbot, its success lies in the pattern matching algorithm. The algorithm of our system is as follows:
1. The system takes input from the console or any API and sends it to the processing unit.
2. In the processing part of the system there are two logic adapters. The text given by the user is matched against the existing queries in the database, and the sentence which best matches the input is selected. For a selected query there can be multiple responses in the knowledge base, so a confidence score is calculated for each response to the selected sentence. A similar procedure is carried out in the second logic adapter. We thus get the two best responses, one from each adapter; the one with the higher confidence score is sent to the output adapter.
3. Steps 1 and 2 continue in a loop until the user exits the console.
4. When the user gives an input, it is stored in the knowledge base as a new query, so with each interaction the knowledge base learns a new query or response.
4.3 TRAINING
ChatterBot includes tools that help simplify the process of training a chatbot instance. ChatterBot's training process involves loading example dialog into the chatbot's database. This either creates or builds upon the graph data structure that represents the sets of known statements and responses. When a chatbot trainer is provided with a data set, it creates the necessary entries in the chatbot's knowledge graph so that the statement inputs and responses are correctly represented [7]. Several training classes come built in with ChatterBot. These utilities range from updating the chatbot's database knowledge graph from a list of statements representing a conversation, to training the bot from a corpus of preloaded training data. The training of our system can be done in two ways:
i. Training via list data: this process allows a chatbot to be trained using a list of strings, where the list represents a conversation. In this case, the order of each response is based on its placement in the given conversation (the list of strings). The steps are:
1. Import ChatterBot and the trainer class library.
2. Set the trainer to the list data trainer.
3. Provide the list of strings to be used for training.
4. Train the chatbot.
5. Give an input and get a response.
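A minimal sketch of these steps, using the older set_trainer-style ChatterBot API that this chapter describes (the API differs in newer releases), with a short invented Bengali conversation for illustration:

```python
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer

bot = ChatBot('BengaliQA')

bot.set_trainer(ListTrainer)   # step 2: use the list data trainer (older ChatterBot API)
bot.train([                    # steps 3 and 4: each string is a reply to the one before it
    'আপনি কেমন আছেন?',
    'আমি ভালো আছি।',
    'ধন্যবাদ।',
])

print(bot.get_response('আপনি কেমন আছেন?'))   # step 5: give an input and get a response
```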
Figure 4.4: Pseudocode of an instance showing how to use the list trainer class.
Figure 4.5: Pseudocode for the list trainer class: 1. The list trainer class is a class with a method called train; it allows a chatbot to be trained using a list of strings where the list represents a conversation. 2. In the train method, initialize a data structure to store the history of the conversation. 3. Store each statement from the provided list in the initialized data structure.
ii. Training via corpus data: ChatterBot comes with corpus data and a utility module that make it easy to quickly train your bot to communicate. To do so, simply specify the corpus data modules you want to use. This training class allows the chatbot to be trained using data from the ChatterBot dialog corpus. For our implementation, we used the corpus trainer class of ChatterBot. First, we have to create the QA corpus in the data folder of ChatterBot in the predefined JSON format; then, from the library, we set the trainer to train with the QA corpus. We provide the pseudocode of the corpus trainer class, based on the code by Gunther Cox, for a better understanding of the trainer class.
Figure 4.6: Pseudocode for the corpus trainer class of ChatterBot: 1. The corpus trainer class is a class with a method called train; it allows the chatbot to be trained using data from the ChatterBot dialog corpus. 2. Import the corpus of the language mentioned in the command from the chatterbot-corpus library. 3. In the train method, initialize a data structure to store the history of the conversation. 4. Check whether the length of the corpus is larger than the capacity of the storage; if it is, return an out-of-space error, otherwise start training. 5. Store each statement from the provided list in the initialized data structure.
4.3.1 STORAGE ADAPTERS
ChatterBot comes with built-in adapter classes that allow it to connect to different types of databases. For our implementation we use the JSON file storage adapter, a simple storage adapter that stores data in a JSON-formatted file on the hard disk. This makes it very good for testing and debugging. We select the JSON file storage adapter by specifying it in our chatbot's constructor. The database parameter specifies the path to the database that the chatbot will use; the database.json file will be created automatically if it does not already exist.
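Putting the storage adapter and the corpus trainer together, the constructor call could look like the sketch below. The adapter import paths follow the older ChatterBot releases described in this chapter and may differ in current versions, and the Bengali corpus module name is an assumption made for illustration.

```python
from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer

bot = ChatBot(
    'BengaliQA',
    storage_adapter='chatterbot.storage.JsonFileStorageAdapter',  # JSON file on disk (older releases)
    database='./database.json',            # created automatically if it does not already exist
    logic_adapters=['chatterbot.logic.BestMatch'],
)

bot.set_trainer(ChatterBotCorpusTrainer)
bot.train('chatterbot.corpus.bengali')     # assumed module name for the Bengali QA corpus we prepared
```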
4.3.2 INPUT ADAPTERS
ChatterBot's input adapters are designed to give a chatbot a versatile way of receiving or retrieving input from a given source. Parameters must be added to specify the input and output terminal adapters; the input terminal adapter simply reads the user's input from the terminal. ChatterBot's input adapter class is an abstract class that represents the interface all input adapters should implement. After getting input, its main job is to classify the text as a known or an unknown statement and pass it to the logic adapter after labeling the sentence as "known" or "unknown". The goal of an input adapter is to get input from some source and convert it into a format that ChatterBot can understand; this format is the Statement object found in ChatterBot's conversation module [8]. We used the variable input type adapter for the implementation of the QA system; it allows the chatbot to accept a number of different input types with the same adapter, accepting strings, dictionaries, and statements.
4.3.3 OUTPUT ADAPTERS
The output adapter allows the chatbot to return a response as a Statement object. It is a generic class that can be overridden by a subclass to provide extended functionality, such as delivering a response to an API endpoint. Since our system is text based, we chose the "Text" format for our chatbot [8].
4.3.4 LOGIC ADAPTERS
Logic adapters determine the logic for how ChatterBot selects responses to a given input statement. The logic adapter that your bot uses can be specified by setting the logic_adapters parameter to the import path of the logic adapter you want to use [8]. It is possible to enter any number of logic adapters for your bot. If multiple adapters are used, the bot returns the response with the highest calculated confidence value; if multiple adapters return the same confidence, the adapter that appears first in the list takes priority. The logic_adapters parameter is a list of logic adapters. In ChatterBot, a logic adapter is a class that takes an input statement and returns a response to that statement.
Figure 4.7: Pseudocode for the best match logic adapter: 1. The BestMatch logic adapter returns a response based on known responses to the closest matches to the input statement. 2. Import the Unicode literals and the logic adapter library. 3. In the get method, take a statement string and a list of statement strings, and return the closest matching statement from the list. 4. If no statement has known responses, the get method chooses a random response to return and sets its confidence score to zero. 5. For known statements, the get method calculates the confidence score using a Jaccard similarity comparison and returns the response with the highest confidence score.
We employ the best match adapter for our chatbot. It is a logic adapter that returns a response based on known responses to the closest matches to the input statement: once it finds the closest match to the input statement, it uses another function to select one of the known responses to that statement. The best match adapter uses the Jaccard similarity function to compare the input statement to known statements. Jaccard similarity compares two sentences based on the Jaccard index, which is a ratio, or in other words a fraction: in the numerator we count the number of items shared between the two sets, and in the denominator we count the total number of items across both sets. Say we define two sentences to be equivalent if 50% or more of their tokens are equivalent, and take the two sample sentences "The young cat is hungry." and "The cat is very hungry." When we parse these sentences to remove stop words, we end up with the sets {young, cat, hungry} and {cat, very, hungry}. The intersection is {cat, hungry}, which has a count of two, and the union is {young, cat, very, hungry}, which has a count of four, so the Jaccard similarity index is two divided by four, or 50%. Given the threshold above, we would consider this to be a match.
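The comparison in this example can be written directly as a small function. The stop-word list below is only the minimal one needed to reproduce the example above, not the list ChatterBot itself uses.

```python
def jaccard_similarity(sentence_a, sentence_b, stop_words=frozenset({'the', 'is'})):
    """Ratio of shared tokens to all tokens, after dropping stop words and trailing punctuation."""
    tokens_a = {w.lower().strip('.') for w in sentence_a.split()} - stop_words
    tokens_b = {w.lower().strip('.') for w in sentence_b.split()} - stop_words
    if not (tokens_a or tokens_b):
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

print(jaccard_similarity('The young cat is hungry.', 'The cat is very hungry.'))  # 0.5
```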
4.3.5 RESPONSE SELECTION METHOD
Response selection methods determine which response should be used when multiple responses are generated within a logic adapter. ChatterBot uses Statement objects to hold information about things that can be said. An important part of how a chatbot selects a response is its ability to compare two statements to each other, and this module contains various text comparison algorithms designed to compare one statement to another. We use the get-first-response method for selecting a response: it takes the input statement and, from a list of statement options, selects the statement in the knowledge base which most closely matches the input to the chatbot.
4.3.6 STATEMENT-RESPONSE RELATIONSHIP
ChatterBot stores knowledge of conversations as statements, and each statement can have any number of possible responses.
Figure 4.8: The relationship between a statement and its responses.
Each Statement object has an in_response_to reference which links the statement to a number of other statements that it has been learned to be in response to.
Figure 4.9: Mechanism of the reference to all parent statements of the current statement.
The in_response_to attribute is essentially a reference to all parent statements of the current statement. The Response object's occurrence attribute indicates the number of times that the statement has been given as a response, which makes it possible for the chatbot to determine whether a particular response is more commonly used than another.
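A much simplified picture of this bookkeeping (not ChatterBot's actual schema) is a mapping from each statement to the statements it has answered, with an occurrence counter per parent:

```python
# Each learned statement records which statements it has been used in response to, and how often,
# so that a more commonly used response can be preferred over a rarer one.
knowledge = {
    'Hello': {'in_response_to': {'Hi': 3, 'Good morning': 1}},
}

def learn(statement, previous_statement):
    entry = knowledge.setdefault(statement, {'in_response_to': {}})
    counts = entry['in_response_to']
    counts[previous_statement] = counts.get(previous_statement, 0) + 1   # occurrence counter
```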
4.4 ENVIRONMENT SETUP
Natural language processing techniques such as the Natural Language Toolkit (NLTK) for Python can be applied to analyze speech, and intelligent responses can be produced by designing an engine that provides appropriate human-like responses [1]. For the setup of the Bengali chatbot we installed Python 3.6 on our machine. Python is a high-level language which is suitable for scientific research.
Figure 4.10: Official Unicode Consortium code chart of Bengali.
With a large resource of libraries available for research purposes, Python is the best choice for natural language processing research. Our chatbot is based on a machine learning engine called ChatterBot which is powered by Python, so to run ChatterBot on our machine it is mandatory to install Python.
Figure 4.11: The Bengali alphabet.
Python 3.6 is recommended for the implementation of a Bengali chatbot because versions of Python below 3.6 cause a "UnicodeDecodeError", a runtime error triggered by non-English languages with a large number of letters in the alphabet. The Unicode range of Bengali is 0980-09FF. Bengali has 11 vowels and 40 consonants and, unlike English, it has consonant conjuncts, modifiers, and other graphemes, so Bengali cannot be handled with ASCII decoding. For easy installation of ChatterBot, it is recommended to install Anaconda, an open source data science platform powered by Python. Only Python 3.6 and Anaconda 3 support taking input in Bengali from a database, so it is advisable to use Python 3.6 and Anaconda 3 for this purpose. Some additional software is required to run ChatterBot on any machine; we installed chatterbot-corpus for the implementation of the English chatbot. It is notable that after implementing the model we added a Bangla corpus to the ChatterBot corpus, so due to our contribution, anyone installing the ChatterBot corpus will also get a sample Bengali corpus made by us.
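A small sketch of the kind of check this implies: Bengali text must be handled as Unicode, with code points falling in the block U+0980-U+09FF, which ASCII decoding cannot represent. The function below is illustrative only.

```python
BENGALI_BLOCK = range(0x0980, 0x0A00)   # Unicode block "Bengali": U+0980 .. U+09FF

def is_bengali(text):
    """True if every non-space character lies in the Bengali Unicode block."""
    return all(ord(ch) in BENGALI_BLOCK for ch in text if not ch.isspace())

print(is_bengali('ভাত খায়'))   # True: all code points are between U+0980 and U+09FF

# Corpus files are therefore opened with encoding='utf-8' (never the ASCII default)
# when loading the knowledge base.
```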
4.5 IMPLEMENTATION
For the implementation of our QA system we went through the following steps sequentially:
i. First of all, we installed the Linux operating system.
ii. The environment required to run the ChatterBot library was set up in our laboratory.
iii. We installed the required software, libraries and toolkits.
iv. After that, we tested some web chatbots to get familiar with such systems, and then prepared the code and the corpus for building up the knowledge base of the system. There is a particular format for input data in the JSON storage, so we had to format the corpus accordingly.
v. We wrote a program which simulates our system.
vi. We then trained the system with our own Bengali dataset.
vii. After successful implementation, we tested the system with different questions.
Thus we completed the implementation of our QA system. The main focus of our work is to generate sentences that are consistent and free from grammatical and spelling mistakes, and our system achieves the goal of producing correct responses. We also integrated this system into a personal blog, and in the process we built a small Bengali corpus. Since the responses are only as good as the knowledge base, a lot of work remains to enhance the knowledge base; topic-wise data can be fed to our system for this purpose, but while building the knowledge base the developer must provide a knowledge base free from errors.
CHAPTER 5 SIMULATION AND RESULT
In this chapter we describe the results of the whole set of experiments, including the hardware, software and libraries used, the amount of data used for training, the accuracy, and the system's overall performance.
5.1 HARDWARE USED FOR SIMULATION
As the process of training the neural network is time consuming, it requires hardware with high computing capability. We ran our model on a machine with the following specification:
i. Intel Core i5-7200U CPU, 2.71 GHz × 8
ii. 8 GB RAM
iii. NVIDIA 940MX 2 GB GPU
As the configuration of our machine is not very powerful, we could not perform any other activity on the machine during the training sessions, because the entire memory was occupied by training the system. Training the data took about 8 days.
5.2 SOFTWARE AND LIBRARIES USED FOR SIMULATION
We had to install some software on our system to implement the QA system. The system was implemented on the Linux operating system. The following software was required: i. Linux OS ii. Python 3 iii. PyCharm iv. Visual Studio v. TensorFlow GPU. We also used some Python libraries to get our system ready for the work: i. Keras ii. TensorFlow iii. Conda iv. NLTK toolkit v. ChatterBot vi. Tokenizer.
5.3 RESULT
The difficulty of evaluation is intrinsic, as each conversation is interactive and the same conversation will not occur more than once; one slightly different answer will lead to a completely different conversation, and moreover there is no clear sense of when such a conversation is "complete" [23]. So for the evaluation, we decided to compare our system with previously existing chatbots. We first trained the system with a Reddit dataset which consists of more than 1,000,000 questions and answers; this training session took about 18 days. The learning rate is 0.001 and the batch size is 1000, i.e. the system takes 1000 question/answer pairs at a time during training. The report step is 20,000, which means the system completes one epoch every 8 reporting steps. After this primary training we made our own Bengali corpus consisting of 1000 entries. We divided it into training and test data at 60 and 40 percent respectively, and then trained the model with this corpus. The reporting step was 50, and we needed 20 epochs. The results show the loss and accuracy during the training session.
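A sketch of the 60/40 split described above, assuming the corpus has been loaded as a list of (question, answer) pairs (the loading itself is not shown):

```python
import random

def split_corpus(pairs, train_fraction=0.6, seed=42):
    """Shuffle the QA pairs and split them into train and test sets (here 60/40)."""
    rng = random.Random(seed)
    shuffled = list(pairs)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# With our 1000-pair Bengali corpus this gives 600 training pairs and 400 test pairs.
```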
After the training, we examined our system with different types of questions. Generating output involves two things: understanding the question from the user's input, and giving an answer from what the system has learned during training. Our system can successfully answer questions, but not in Bengali: after understanding the question it gives answers very quickly, but it is sometimes confused when answering, taking input in Bengali yet giving the answer in English. We implemented a tokenizer for machine translation but did not get any improvement; this can be a topic for future research. The results for question answering are given below.
5.3.1 SCREENSHOTS OF THE TEST RESULT
Figures 5.1-5.7: Snapshots of the system's responses.
5.4 ANALYSIS
Since we did not have more time to modify our model to perform better, we have to be satisfied with its current performance. Our work addresses the problem of developing a QA system despite the lack of required language processing tools such as a part-of-speech tagger and a tokenizer; we solve this problem by selecting a language-independent platform and choosing a retrieval-based model. We are therefore left with the option of comparing it with English chatbots, so we compare our QA system with two popular chatbots, the Neural Conversational Model (NCM) and Cleverbot. To ensure a fair comparison, the exact same questions were asked in both cases. NCM is a neural, open domain, generative chatbot, whereas ours is a retrieval-based closed domain one; in the experiments, our chatbot gives responses similar to NCM's. Cleverbot is a chatbot hosted on a website which learns from users and answers based on the conversation history, which is quite similar to our work, and it is interesting to observe that our system outwitted Cleverbot in many cases. We also examined our QA system by inputting unknown sentences as test cases and found that it produces random answers to such questions; however, it stores the reply given by the user to the unknown sentence and later gives the same answer to another user in another instance. Our QA system is able to reply in real time like the others. Since it can take input in English and give a response, we can say that the pattern matching algorithm is functioning well. Our chatbot's replies are syntactically correct and free from spelling and grammatical mistakes of any sort; it makes some punctuation mistakes, which can be improved in future. From the
samples, we can see that our QA system gives replies similar to the Neural Conversational Model (NCM), whereas in comparison with the related Cleverbot our system outwitted it in most instances. Among its many limitations, the lack of a coherent personality makes it difficult for our system to pass the Turing test [15]. Our work also provides a conversation corpus, and generating this corpus has many advantages. A corpus is considered a basic resource for language analysis and research for many foreign languages; this reflects both ideological and technological change in the area of language research, a change largely caused by the introduction of computers and corpora into linguistic research, which in turn has opened up many new applications of language (and linguistics) in the fields of communication and information exchange. This corpus can be useful for producing many sophisticated automatic tools and systems, besides being a good resource for language description and theory making. This QA system can automatically answer questions asked by the user. We trained our model with many questions and then queried the system; it produces the expected results, and although in some cases the prediction goes slightly wrong, this can be reduced by fine-tuning the system in the near future with other available datasets.
CHAPTER 6 CONCLUSION AND FUTURE WORK
6.1 CONCLUSION
QA systems have the ability to model natural language and establish a conversation with a user through a question/answer protocol. There are three types of approaches, depending on the freedom they have at the time of answering: rule-based, retrieval-based and generative-based. The first two approaches are the most used nowadays due to their effectiveness at maintaining a closed-domain conversation. The generative-based models, on the other hand, arise as a powerful alternative in the sense that they can better handle an open-topic conversation; they are much closer to the idea of strong AI, with no human intervention at the time of answering and everything learned by the machine. Promising results have been achieved with generative-based chatbot models by applying neural translation techniques with encoder/decoder architectures. In this thesis it has been shown that chatbot models based on encoder/decoder architectures using exclusively attention outperform neural network models. It is important to mention that all models discussed in this project shape and mimic natural human language but do not apply any logic to their answers; that is why many of the answers are not coherent with each other and the final model lacks a "personality". The main focus of our work is to generate sentences that are consistent and free from grammatical and spelling mistakes, and our system achieves the goal of producing correct responses. Since the responses are only as good as the knowledge base, a lot of work remains to enhance the knowledge
base. Topic-wise data can be fed to our system for the enhancement of its knowledge base, but while building the knowledge base the developer must provide a knowledge base free from errors.
6.2 LIMITATIONS
QA systems can enable users to access knowledge in a natural way by asking natural language questions and getting back relevant, correct answers. The major challenges in QA systems are: understanding natural language questions regardless of their type or representation; understanding knowledge derived from structured, semi-structured and unstructured sources up to the semantic web; and searching for the relevant, correct and concise answers that can satisfy the information needs of users. We face some limitations in our system: it answers questions in English, whereas it should answer in Bengali.
6.3 FUTURE WORK
There is considerable scope for improving the system, for example by building a larger Bengali corpus and training the model on it. One way to enhance the knowledge base of this system is to host it on a crowdsourcing platform; like Google Translate for Bengali, the QA system will then be able to learn from interactions with users, and the more interactions there are, the higher the percentage of relevant replies to a query the QA system can provide. In future we can train a chatbot with a neural network model once a sufficiently large Bengali corpus is available, and we can try a crowd-sourced model to enrich the database by integrating this chatbot into a website. Context can also be incorporated in future work: to produce sensible responses, systems may need to incorporate both linguistic context and physical context, since in long conversations people keep track of what has been said and what information has been exchanged. Making an open domain generative system that incorporates context requires a large collection of conversation data, which we do not have, so this is left for the future. We can also improve the automated system by making the chatbot voice-enabled and by replying with pictorial representations for better understanding by people with low literacy, as in "Sophia".
REFERENCES
[1] Abdul-Kader, S. A., & Woods, J. (2015). Survey on chatbot design techniques in speech conversation systems. International Journal of Advanced Computer Science and Applications (IJACSA), 6(7).
[2] Hu, B., Lu, Z., Li, H., & Chen, Q. (2014). Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems, pp. 2042-2050.
[3] Beech, H. (2014). What's all the fuss about WhatsApp? China's WeChat is a worthy rival. Time.com, 2, 1.
[4] Berger, A., Caruana, R., Cohn, D., Freitag, D., & Mittal, V. (2000). Bridging the lexical chasm: statistical approaches to answer-finding. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 192-199.
[5] Colby, K. (1999). Comments on human-computer conversation. In Wilks, Y. (ed.), Machine Conversations. Kluwer,
Boston/Dordrecht/London, pp. 5-8.
[6] Wang, H., Lu, Z., Li, H., & Chen, E. (2013). A dataset for research on short-text conversations. In EMNLP, pp. 935-945.
[7] Greenwood, M., & Gaizauskas, R. (2003). Using a named entity tagger to generalise surface matching text patterns for question answering. In Proceedings of the Workshop on Natural Language Processing for Question Answering (EACL03), pp. 29-34.
[8] Li, J., Galley, M., Brockett, C., Gao, J., & Dolan, B. (2016). A persona-based neural conversation model. arXiv preprint arXiv:1603.06155.
[9] Islam, Z., Mehler, A., & Rahman, M. R. (2012, November). Text readability classification of textbooks of a low-resource language. In PACLIC, pp. 545-553.
[10] Kwok, C., Etzioni, O., & Weld, D. S. (2001). Scaling question answering to the Web. ACM Transactions on Information Systems (TOIS), 19(3), pp. 242-262.
[11] Pereira, M. J., & Coheur, L. (2013). Just.Chat - a platform for processing information to be used in chatbots.
[12] Meeng, M., & Knobbe, A. (2011). Flexible enrichment with Cortana - software demo. In Proceedings of BeneLearn, pp. 117-119.
[13] Moschitti, A. (2003). Answer filtering via text categorization in question answering systems. In Proceedings of the 15th IEEE International Conference on Tools with Artificial Intelligence, pp. 241-248.
[14] Natural Language Processing and Information Systems, Springer Berlin Heidelberg, 2002, pp. 235-239.
[15] Ravichandran, D., & Hovy, E. (2002). Learning surface text patterns for a question answering system. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 41-47.
[16] Ritter, A., Cherry, C., & Dolan, B. (2010, June). Unsupervised modeling of Twitter conversations. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 172-180. Association for Computational Linguistics.
[17] Saenz, A. (2010). Cleverbot chat engine is learning from the internet to talk like a human. Singularity Hub.
[18] Shawar, B. A., & Atwell, E. (2007). Chatbots: are they really useful? Journal of Computational Linguistics and Language Technology, 22(1), pp. 29-49.
[19] Shang, L., Lu, Z., & Li, H. (2015). Neural responding machine for short-text conversation. In ACL 2015, Beijing, China, Volume 1: Long Papers, pp. 1577-1586.
[20] Stent, A., & Bangalore, S. (Eds.). (2014). Natural Language Generation in Interactive Systems. Cambridge University Press.
[21] Shawar, B. A., & Atwell, E. (2007, April). Different measurement metrics to evaluate a chatbot system. In Proceedings of the Workshop on Bridging the Gap: Academic and Industrial Research in Dialog Technologies, pp. 89-96. Association for Computational Linguistics.
[22] Sneiders, E. (2002). Automated question answering using question templates that cover the conceptual model of the database. In Natural Language
[23] Soubbotin, M. M., & Soubbotin, S. M. (2001). Patterns of potential answer expressions as clues to the right answer. In Proceedings of TREC-10, NIST, pp. 175-182.
[24] Young, S., Gašić, M., Keizer, S., Mairesse, F., Schatzmann, J., Thomson, B., & Yu, K. (2010). The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Computer Speech & Language, 24(2), pp. 150-174.
[25] Turing, A.
M. (1950). Computing machinery and intelligence. Mind, 59(236), pp. 433-460.
[26] Vinyals, O., & Le, Q. (2015). A neural conversational model. arXiv preprint arXiv:1506.05869.
[27] Weizenbaum, J. (1966). ELIZA - a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), pp. 36-45.
[28] Wilks, Y. (1999). Preface. In Wilks, Y. (ed.), Machine Conversations, pp. vii-x. Kluwer, Boston/Dordrecht/London.
[29] Wu, Y., Wu, W., Li, Z., & Zhou, M. (2016). Response selection with topic clues for retrieval-based chatbots. arXiv preprint arXiv:1605.00090.
[30] Yu, Z., Xu, Z., Black, A., & Rudnicky, A. (2016). Chatbot evaluation and database expansion via crowdsourcing. In Proceedings of the Chatbot Workshop of LREC.
[31] Zhang, D., & Lee, W. S. (2002). Web based pattern mining and matching approach to question answering. In Proceedings of the 11th Text REtrieval Conference.
[32] Zhou, X., Dong, D., Wu, H., Zhao, S., Yan, R., Yu, D., ... & Tian, H. (2016). Multi-view response selection for human-computer conversation. EMNLP'16.
[33] Ji, Z., Lu, Z., & Li, H. (2014). An information retrieval approach to short text conversation. arXiv preprint arXiv:1408.6988.
APPENDIX A
Appendix A contains the code of our system. The code has been uploaded to a Git repository; the following link leads to the repo: https://github.com/sakifabir/chatbot.
Figures A.1-A.7: Snapshots of the chat.py file. Figures A.8-A.13: Snapshots of the model.py file. Figures A.14-A.17: Snapshots of the utils.py file. Figures A.18-A.21: Snapshots of the train.py file.
APPENDIX B
Appendix B shows some snapshots of our dataset. Figures B.1-B.9: Snapshots of the dataset.
Detection of Semantic Errors from Simple Bangla Sentences
K. M. Azharul Hasan, Muhammad Hozaifa, Computer Science and Engineering Department, Khulna University of Engineering & Technology, Khulna 9203, Bangladesh. azhasan@gmail.com, hozaifa.moaj@gmail.com
Sanjoy Dutta, Computer Science and Engineering Department, Khulna University of Engineering & Technology, Khulna 9203, Bangladesh. dsanjoy58@live.com
Abstract - We describe a methodology to detect semantic errors in Bangla sentences. According to Bangla grammar, a single verb can have many forms depending on its tense and the person of its subject. The subject of a sentence can be a noun or a pronoun and may indicate a human, an animal, or a non-living entity. There is a fixed semantic relation between every verb and the subject and object of a sentence; for example, a non-living entity can never feel hungry but a living entity can. Checking this semantic difference for correctness in a language is very important for machine learning studies and for developing intelligent agents for human-computer interaction. Semantic error detection for Bangla is an important research problem because of the variety that the Bangla language offers in its grammatical, structural and semantic diversity. In this paper, we establish the relationship between subject and verb as well as between object and verb of a Bangla sentence, and we propose an algorithm for checking the semantic correctness of simple Bangla sentences. The algorithm can easily be extended to other forms such as complex and compound sentences.
Keywords - Bangla language processing, Bangla grammar, semantic analysis, Bangla simple sentence structure.
I. INTRODUCTION
The word semantics expresses a range of ideas, from everyday language to the highly technical. It is often used in ordinary language to denote a problem of understanding that comes down to word selection or connotation [1]. In linguistics, semantics is the branch that deals with the study of meaning, changes in meaning, and the principles that govern the relationship between sentences or words and their meanings [2]. Sounds, facial expressions and body language also have semantic (meaningful) content, and each comprises several branches of study. The study of semantic detection in language thus enables systems to interpret and perform specific tasks when the orders, sentences or symbols are meaningful. Semantic error detection is a challenging task for a resource-constrained language like Bangla, but research on semantic correctness checking is very important for machine learning, opinion mining and the development of intelligent agents for human-computer interaction. There has been some research on the Bangla language, such as Bangla grammar detection [3]-[5], opinion mining or sentiment detection from Bangla text [6][7], Bangla character recognition [8], English-to-Bangla and Bangla-to-English translation [9], and Bangla text-to-speech and speech-to-text synthesis [10][11]. However, research on semantic detection from Bangla text is still limited because of the absence of a standard corpus of Bangla words [12]. The semantics of a Bangla sentence basically depends on the verb(s) used in the sentence. In this paper, we consider simple sentences having a single verb of the form Subject + Object + Verb (SOV). In the SOV form, the relation of the verb with the subject and object is
twofold: 1. whether the verb and the subject form a well-formed structure with semantic compatibility (the SV relation), and 2. whether the object and the verb have semantic compatibility (the OV relation). To establish these two relations, we have created a classification table of Bangla verbs on the basis of tense and person, and a classification table of nouns on the basis of person, species, gender, etc. We then use the subject-verb relation and the object-verb relation to check the semantic correctness of both the SV and the OV relation.
II. SEMANTIC ERROR DETECTION FROM SIMPLE BANGLA SENTENCES
Detecting semantic errors is difficult and needs a lot of preprocessing work, such as constructing a semantic knowledge base and an automatic error-detection algorithm based on this knowledge base. To construct this semantic knowledge base, we have formulated the problem in two main parts, namely categorization, and relationship validation and acceptance checking.
A. Categorization
Categorization implies that objects are grouped into categories, usually for some specific purpose. Categorization is fundamental in language, prediction, inference, decision making and all kinds of environmental interaction [13]. We have categorized the words so that all words in the same category possess the same semantic relationship with other entities. We chose Bengali sentences with the complete basic structure Subject + Object + Verb, for example: মানষু (subject) ভাত (object) খায় (verb)। েস (subject) ভাত (object) খােব (verb)। কাক (subject) আকােশ (object) uেড় (verb)। These sentences are broken into subject, object and verb parts for the purpose of categorization. We have prepared categorization tables for identifying classes of verbs and classes of subjects from living beings, and their inter-relationships based on social context and usage. Tables I and II show some examples of noun and verb categorization for Bangla.
TABLE I. NOUN CATEGORIZATION (category: members) — মানষু: মানষু, রিহম, কিরম, রিব, আিম, তুিম, েস; গর:ু গর,ু গাভী; পািখ: পািখ, কাক, বাবiু, চড়ুi; মাছ: মাছ, iিলশ, রiু, কাতলা; বাঘ: বাঘ, বািঘনী.
TABLE II. VERB CATEGORIZATION (category: members) — uেড়: uড়া, uেড়িছল, uড়েব, uড়ল, uিড়, uেড়; ভােস: ভাসিছল, ভাসেব, ভাসল, ভািস, ভােস; খায়: খােব, েখল, খাiেব, েখেয়িছল, খাi, খায়; গায়: গায়, গাiেব, গািcল, গাiেব; চালায়: চালায়, চেল, চলেছ, চলেব.
B. Relationship Validation and Acceptance Checking
From the noun and verb categorizations we map the subject and verb into more general classes and hence develop the relationships. Based on the categorization of verbs and nouns, we developed a relationship between the subject and verb as well as between the verb and object of the sentence.
Definition 1 (SV relation): If there is a well-established semantic bond between a subject (noun) of the sentence and the verb, then there is a true SV relation between the subject and verb; otherwise the relation is false. For example, "পািখ uেড়" has a true SV relation and "মাছ uেড়" has a false SV relation.
Definition 2 (OV
relation): If there is a well-established semantic bond between an object of the sentence and the verb, then there is a true OV relation between the object and verb; otherwise the relation is false. An OV relation can only be true when there is a true SV relation in that sentence. For example, "মানষু ভাত খায়" has a true OV relation and "মানষু ঘাস খায়" has a false OV relation.
Using the SV relationships, we have created a Validation Table (VT) to check semantic acceptance. The entries of the VT are Boolean values, True (T) or False (F): if the SV relation between subject and verb is false the entry is F, otherwise the entry is T. If the entry is T then it has one more entry, which indicates the OV relation, because an OV relation is established only when there is a true SV relation. If the VT entry is true, the corresponding OV entry points to the set of objects for which the relation holds.
TABLE III. VALIDATION TABLE OF THE BOOLEAN RELATIONSHIP BETWEEN SUBJECT AND VERB (rows are verbs; columns are the subjects মানষু, গর,ু পািখ, মাছ, বাঘ) — খায়: T/S11, T/S12, T/S13, T/S14, T/S15; uেড়: F, F, T/S23, F, F; ভােস: F, F, F, T/S34, F; সাঁতরায়: T/S41, T/S42, T/S43, T/S44, T/S45; গায়: T/S51, F, T/S53, F, F; কের: T/S61, F, T/S63, T/S64, T/S65; কােট: T/S71, T/S72, F, F, F; পেড়: T/S81, F, F, F, F; েদেখ: T/S91, T/S92, T/S93, T/S94, T/S95; jালায়: T/S101, F, F, F, F; েখেল: T/S111, F, F, F, F; বেল: T/S121, F, F, F, F; চালায়: T/S131, F, F, F, F; হয়: T/S141, F, F, F, T/S145; রােখ: T/S151, F, F, F, F; িশেখ: T/S161, F, F, F, F.
Table 3 shows a sample VT and Table 4 shows sample OV sets. We check the semantic acceptance of a sentence by checking whether there is a true SV relation, i.e. whether the corresponding entry in the VT is T. If there is a valid relationship between subject and verb, then the corresponding set in the OV table is checked, and if the object is a member of that set the sentence is semantically correct; otherwise it is incorrect.
TABLE IV. SAMPLE OV SETS — S11 = {গর,ু পািখ, ফল, ভাত, মরুগী, ...}; S12 = {ঘাস, ভাত, ...}; S13 = {েপাকামাকড়, েকঁেচা, ...}; S161 = {েকারআন, গান, ...}.
For example, consider "মানষু ভাত খায়" (man eats rice). Here "মানষু" (man) is the subject and "খায়" (eats) is the verb. We check the relationship in the VT (Table 3), find that there is a true relation between the subject (man) and the verb, and that it points to the OV set S11 (Table 4). Since ভাত (rice) is a member of S11, the sentence is semantically correct. Similarly, "গর ুভাত খায়" ("cow eats rice") is judged semantically incorrect because its OV relation is false. Any sentence expressing a relationship which is illogical or irrational will therefore be marked semantically incorrect and will not be accepted by the framework.
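A sketch of this lookup in code, using a hypothetical fragment of the validation table and OV sets (the Bengali entries are written in standard orthography and the values are illustrative, not the full Tables III and IV):

```python
# VT[verb][subject] is True when the subject-verb (SV) relation holds;
# OV[(verb, subject)] is the set of objects for which the object-verb (OV) relation also holds.
VT = {
    'খায়': {'মানুষ': True, 'গরু': True},    # "eats": valid for man and cow
    'উড়ে': {'পাখি': True, 'মাছ': False},   # "flies": valid for bird, not for fish
}
OV = {
    ('খায়', 'মানুষ'): {'ভাত', 'ফল'},        # man eats rice, fruit
    ('খায়', 'গরু'): {'ঘাস'},               # cow eats grass
}

def is_semantically_correct(subject, obj, verb):
    """Check a simple SOV sentence: first the SV relation, then the OV relation."""
    if not VT.get(verb, {}).get(subject, False):
        return False                        # no valid subject-verb bond
    return obj in OV.get((verb, subject), set())

print(is_semantically_correct('মানুষ', 'ভাত', 'খায়'))   # True:  "man eats rice"
print(is_semantically_correct('গরু', 'ভাত', 'খায়'))     # False: "cow eats rice"
```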