|
{ |
|
"paper_id": "2019", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:29:50.741861Z" |
|
}, |
|
"title": "Samajh-Boojh: A Reading Comprehension System in Hindi", |
|
"authors": [ |
|
{ |
|
"first": "Shalaka", |
|
"middle": [], |
|
"last": "Vaidya", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Hiranmai", |
|
"middle": [], |
|
"last": "Sri Adibhatla", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "hiranmai.sri@research.iiit.ac.in" |
|
}, |
|
{ |
|
"first": "Radhika", |
|
"middle": [], |
|
"last": "Mamidi", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "radhika.mamidi@iiit.ac.in" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper presents a novel approach designed to answer questions on a reading comprehension passage. It is an end-toend system which first focuses on comprehending the given passage wherein it converts unstructured passage into a structured data and later proceeds to answer the questions related to the passage using solely the aforementioned structured data. To the best of our knowledge, the proposed model is first of its kind which accounts for entire process of comprehending the passage and then answering the questions associated with the passage. The comprehension stage converts the passage into a Discourse Collection that comprises of the relation shared amongst logical sentences in given passage along with the key characteristics of each sentence. This model has its applications in academic domain and query comprehension in speech systems among others.", |
|
"pdf_parse": { |
|
"paper_id": "2019", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper presents a novel approach designed to answer questions on a reading comprehension passage. It is an end-toend system which first focuses on comprehending the given passage wherein it converts unstructured passage into a structured data and later proceeds to answer the questions related to the passage using solely the aforementioned structured data. To the best of our knowledge, the proposed model is first of its kind which accounts for entire process of comprehending the passage and then answering the questions associated with the passage. The comprehension stage converts the passage into a Discourse Collection that comprises of the relation shared amongst logical sentences in given passage along with the key characteristics of each sentence. This model has its applications in academic domain and query comprehension in speech systems among others.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The Samajh-Boojh 1 system which we have built focuses on the basic principles behind utilization of rules in order to capture the semantics of the given passage which is in the Devanagari script. The current trend is towards incorporating machine learning in the question answering models but they come with a downside of requiring huge quantity and variety of training data to achieve decent accuracy. Whereas the proposed model is rule-based and hence eliminates the need for extensive training data while still providing 75% accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The Samajh-Boojh system answers 11 types of questions (Table 1) using approximately 25 * equal contribution 1 Samajh-Understanding and Boojh-Analysis, which translates to reading comprehension in Hindi.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 63, |
|
"text": "(Table 1)", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "rules. This sheds light on the fact that, with substantially less number of rules a wide range of questions can be answered. It is an extension to Prashnottar model (Sahu et al., 2012) which could handle 4 types of questions using 4 rules. The system can be classified into two parts, the comprehension part and the question answering part, these two parts together ensure that the system behaves similar to the way humans approach the questions which are asked based on reading comprehension passage. The comprehension part of the system converts the given passage whose inherent structure cannot be grasped by the machine to a structured and machine extractable data called Discourse Collection. The Discourse Collection is then sent to the QA system as an input along with the query to obtain the relevant answer. This feature sets the proposed model apart from the commonly used information retrieval and extraction based techniques.", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 184, |
|
"text": "(Sahu et al., 2012)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The Panchatantra collection 2 comprising of 65 short stories was used to experiment on the model. This dataset had variety of stories with different lengths. The questions on each of these stories were framed by multiple annotators and best of which were picked to validate the system. The answers given by the annotators were used as gold data to measure the quality of the system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The Samajh-Boojh System is broadly classified into two parts: comprehension part and question answering part. The system works similar to human approach of answering reading comprehension questions. The system takes passage and queries corresponding to the passage as the input, and returns answers to the questions. In subsequent sections we dwell deeper into these parts and see how they function.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architecture and Design", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The reading comprehension part of the system is responsible for structuring the story into a Discourse Collection which contains the characteristics of the story. This structure is inspired by Thorndyke's Story Grammar (Thorndyke, 1977) . Discourse Collection is comprised of the following components:", |
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 236, |
|
"text": "(Thorndyke, 1977)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Part", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Episode_Id: Unique key associated to the sentence. Its incrementally assigned as we parse the sentences. In Discourse Collection, we associate each logical sentence as an episode.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Part", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Original_sentence: The WX version of the logical sentence found in the story.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Part", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Time: The time setting in which the episode took place. If the Original_Sentence doesn't specify the time, this field is populated from the previous episode's Time value. Default is 'tbd: to be decided' Location: The place in which the episode took place. If the Original_Sentence doesn't specify the location, this field is populated from the previous episode's Location value. Default is 'tbd: to be decided' Karta: The karta (doer) of the logical sentence is given as the value. This is obtained from the dependency parser 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Part", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Karta_Adpos: The Adpos (adjective and prepositions) associated with the Karta to frame the answers during the questionanswering stage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Part", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Karma: The karma of the logical sentence is populated in this column.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Part", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Karma_Adpos: The Adpos (adjective and prepositions) associated with the Karma to frame the answers during the questionanswering stage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Part", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Anaphora_Resolved_Sentences: The logical sentence in which the anaphora is replaced with noun.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Part", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Root_Node_Sentences: The words of the logical sentences are replaced with their roots. For this we used the shallow parser 4 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Part", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Given: The sentence which is related to the current sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Part", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "New: The current sentence which is having the Given sentence as a prerequisite.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Part", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Parser_Output: The output of the dependency parser.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Part", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The overview of the Reading Comprehension system is seen in Figure 1 . The passage is given as the input to Logical Sentence Module to break it into logical sentences, the split passage is given as input to the Discourse Generator module which contains Graph Maker Module, Anaphora Resolution Module, Root Node Resolution Module and Discourse Information Filler Module. The final output of these four modules is the Discourse Collection which is the output of comprehension system. The individual modules of the reading comprehension system along with detailed working is explained in the forthcoming sections.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 68, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Part", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Even though the passage can easily be split into words, sentences and paragraphs when given as the input, it's a challenge to extract the semantics. We break the sentences based on the generic punctuation marks such as full stop, comma, semicolon, question mark, exclamation mark and conjunctions such as \u0914\u0930, \u093f\u0915, \u092a\u0930, \u0915\u0930, \u093f\u092b\u0930, \u0907\u0938\u0940\u093f\u0932\u090f, \u0924\u092c, \u0924\u094b, \u0915\u094d\u092f\u094b\u0902\u093f\u0915, \u0915\u094d\u092f\u0942 \u0902 \u093f\u0915, \u0932\u0947 \u093f\u0915\u0928, \u092a\u0930\u0902 \u0924\u0941 , \u093f\u0915\u0928\u094d\u0924\u0941 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Logical Sentences Module", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When splitting the sentences by the split words, we noticed the tags were improper at some instances. Example: S1:\u0930\u093e\u092e \u0918\u0930 \u091c\u093e\u0928\u093e \u091a\u093e\u0939\u0924\u093e \u0925\u093e \u092a\u0930 \u0928\u0939\u0940\u0902 \u091c\u093e \u092a\u093e\u092f\u093e\u0964 T1: Ram wanted to go home but couldn't go. S2:\u0930\u093e\u092e \u0918\u094b\u095c\u0947 \u092a\u0930 \u092c\u0948 \u0920\u093e \u0925\u093e\u0964 T2: Ram sat on the horse.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Logical Sentences Module", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here the word \u092a\u0930 translates to 'but' and 'on', we want to split the sentence into two parts only in S1 and not in S2. The POS tags using Dependency Parser 3 give 'PSP' tag, hence making it difficult to differentiate on which \u092a\u0930 to split the sentences. To resolve this issue we decided to see the context of the given split word and then make the decision. The logic for deciding whether to split or not is given If the word before the split word is verb ie. having POS tag as 'VM' or 'VAUX' then we split the sentence into two parts and populate the Discourse Collection with two episodes each containing the split sentences as Origi-nal_Sentences(OS). In the Figure 2 , we see in the sample passage that apart from the full stops, the sentences are split at 'taba/\u0924\u092c' and 'para/\u092a\u0930' resulting in 5 episodes.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 660, |
|
"end": 668, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Logical Sentences Module", |
|
"sec_num": null |
|
}, |
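
{

"text": "A minimal Python sketch of the split decision described above; the function and variable names (and the romanized split-word list) are illustrative assumptions rather than the system's actual code, but the rule matches the one stated here: split at a candidate conjunction only when the preceding word carries the POS tag 'VM' or 'VAUX'.\ndef should_split(tokens, pos_tags, idx, split_words=('para', 'taba', 'kara', 'phira')):\n    # Split only if the token is a listed conjunction and the previous word is a verb.\n    if idx == 0 or tokens[idx] not in split_words:\n        return False\n    return pos_tags[idx - 1] in ('VM', 'VAUX')\n\ndef logical_sentences(tokens, pos_tags):\n    # Break one passage sentence into logical sentences (episodes).\n    parts, current = [], []\n    for i, tok in enumerate(tokens):\n        if should_split(tokens, pos_tags, i):\n            parts.append(' '.join(current))\n            current = []\n            continue  # the conjunction itself is dropped, as in Figure 2\n        current.append(tok)\n    parts.append(' '.join(current))\n    return parts",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Logical Sentences Module",

"sec_num": null

},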
|
{ |
|
"text": "The Original_Sentence which was populated in Logical Sentence Module is sent through the dependency parser 3 and the parser output is stored in the Discourse Collection of the corresponding episode. From the parser output, if there exits any k7t relation, it is stored in the 'Time' slot of the episode. If there exists a k7p relation, it is stored in the 'Location' slot of the episode. The word with k1 relation is stored in the 'Karta' slot of the episode along with the 'lwg__psp' as case marker of the Karta and word with relation 'nmod__adj' as the Karta adjective, the case marker and adjective with Karta word are called Karta_adpos and stored in the corresponding episode. The word with k2 relation is stored in Karma slot of the episode. The child nodes of the Karma in dependency tree who have the relations 'lwg__psp' and 'nmod__adj' are stored as Karma_Adpos.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Maker", |
|
"sec_num": null |
|
}, |
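
{

"text": "A minimal Python sketch of how the Discourse Collection slots could be filled from the dependency relations listed above. The (head, child, relation) triple representation of the parser output is an assumption made for illustration; the actual parser emits a CoNLL-style tree.\ndef fill_graph_slots(episode, parse_edges):\n    # First pass: karaka relations fill the main slots.\n    for head, child, rel in parse_edges:\n        if rel == 'k7t':\n            episode['time'] = child\n        elif rel == 'k7p':\n            episode['location'] = child\n        elif rel == 'k1':\n            episode['karta'] = child\n        elif rel == 'k2':\n            episode['karma'] = child\n    # Second pass: case markers and adjectives attached to the Karta/Karma become Adpos.\n    for head, child, rel in parse_edges:\n        if rel in ('lwg__psp', 'nmod__adj'):\n            if head == episode.get('karta'):\n                episode.setdefault('karta_adpos', []).append(child)\n            elif head == episode.get('karma'):\n                episode.setdefault('karma_adpos', []).append(child)\n    return episode",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Graph Maker",

"sec_num": null

},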
|
{ |
|
"text": "We use the Original_Sentence from the episode to resolve the anaphora and store it in Anaphora_Resolved_Sentence(ARS) of the corresponding episode. We used the algorithm mentioned in Dakwale(2014) to resolve the anaphora. This algorithm is a right fit as it uses rules from the dependency parser 3 . We see in Figure 2 that the the word 'vaha/\u0935\u0939' translates into rAma in episode 2 and billI in episode 4 based on the context, 'usE/\u0909\u0938\u0947 ' and 'vO/\u0935\u094b' are resolved into rAma in episode 2 and 5.", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 196, |
|
"text": "Dakwale(2014)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 310, |
|
"end": 318, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Anaphora Resolution Module", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The root of a word is important when we are comparing two sentences. We convert each word in Anaphora_Resolved_Sentence of the episode into its root form and store it as Root_Node_Sentence(RNS) for corresponding episode. We used the IIIT Parser 4 and the output was parsed through the SSF format mentioned in Bharati(2007) and the root form of the words were extracted. In episode 2 the word 'gaya' changes to 'ja' in Figure 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 322, |
|
"text": "Bharati(2007)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 418, |
|
"end": 426, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Root Node Resolution Module", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Discourse is when we look beyond the scope of a sentence and use information between their relation. Here we fill the 'Given' and 'New' values of the episode. The default values are 'tbd: to be decided'. If the passage sentence has split words mentioned in section 2.1, the sentence is split into two episodes such that, the second episode will contain the first split sentence as 'Given' in its Discourse Collection and second split sentence as the 'New'. Our assumption and complexity is limited to identifying a co-dependency between two sentences if they are separated by split words. The output of this stage will give the Discourse Collection. Episode 2 in the Figure 2 , has 'Given' and 'New' values populated since it has the word 'taba/\u0924\u092c' (translation: then).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 667, |
|
"end": 675, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discourse Information Filler", |
|
"sec_num": null |
|
}, |
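
{

"text": "A small illustrative Python sketch of the Given/New filling described above; the 'from_split' flag is a hypothetical marker assumed to be set by the Logical Sentences Module for the second half of a split sentence.\ndef fill_given_new(episodes):\n    # episodes: ordered list of episode dicts with anaphora-resolved sentences in 'ARS'.\n    for prev, curr in zip(episodes, episodes[1:]):\n        if curr.get('from_split'):\n            curr['given'] = prev['ARS']  # the first split sentence\n            curr['new'] = curr['ARS']    # the current (second) split sentence\n    return episodes",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Discourse Information Filler",

"sec_num": null

},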
|
{ |
|
"text": "rAma Eka acchA ladkA thA. vaha Eka dina pAThaSAlA jA rahA thA taba usE Eka billI dikhI. rAma usakE pAsa gayA para vaha bhAga gaI aura vO dukhI hO gayA Discourse Collection: { \"0\": { \"OS\": \"rAma Eka acchA ladkA thA\", \"karta\": \"rAma\", \"kartaadj\": [\"rAma\", \"acchA\"], \"ARS\": \"rAma Eka acchA ladkA thA\", \"RNS\": \"rAma Eka acchA ladka thA\" } \"1\": { \"OS\": \" vaha Eka dina pAThaSAlA jA rahA thA\", \"time\": \"din\", \"location\": \"pAThaSAlA\", \"karta\": \"rAma\", \"ARS\": \"rAma Eka dina pAThaSAlA jA rahA thA\", \"RNS\": \"rAma Eka dina pAThaSAlA jA rahA thA\" } \"2\": { \"OS\": \"usE Eka billI dikhI\", \"time\": \"din\", \"location\": \" pAThaSAlA\", \"karta\": \"billI\", \"given\": \" rAma Eka dina pAThaSAlA jA rahA thA\", \"new\": \"rAma Eka billI dikhI\", \"ARS\": \"rAma Eka billI dikhI\", \"RNS\": \"rAma Eka billI dikha\" } \"3\": { \"OS\": \" rAma billI pAsa gayA\", \"time\": \"din\", \"location\": \" billI pAsa\", \"karta\":\"rAma\", \"ARS\": \"rAma billI pAsa gayA\", \"RNS\": \"rAma billI pAsa jA\" } \"4\": { \"OS\": \"vaha bhAga gaI\", \"time\": \"din\", \"location\": \" billI pAsa\", \"karta\": \"rAma\", \"given\": \" rAma billI pAsa gayA\", \"new\": \"billI bhAga gaI\", \"ARS\": \"billI bhAga gaI\", \"RNS\": \"billI bhAga jA\" } \"5\": { \"OS\": \"vO dukhI hO gayA\", \"time\": \"din\", \"location\": \" billI pAsa\", \"karta\": \"rAma\", \"given\": \"billI bhAga gaI\", \"new\": \"rAma dukhI hO gayA\", \"ARS\": \"rAma dukhI hO gayA\", \"RNS\": \"rAma dukhI hO jA\" } }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Passage:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The passage and its corresponding discourse collection is shown here. Only the populated values are shown, rest all are 'tbd-to be decided' except for parser_output.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 2: Discourse Collection", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "OS-original_sentence, ARS-Anaphora_Re-solved_Sentence, RNS-Root_Node_Sentence ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 2: Discourse Collection", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The Question-Answering part of this model takes Discourse Collection which was output of the Reading Comprehension part, as the input along with the query related to the passage and returns the answer as the output. The brief overview of the system is shown in Figure 3. The query is fed in the Devanagari format and the answer is given in the same. The working of this system is seen in following sections.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 267, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Question Answering Part", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "This module takes the Devanagari format of the input query and returns the type of the question along with key word relevant to the query. This format is similar to that found in QLL (Vargas-Vera et al., 2003) . We tag the questions based on the question words into 11 major categories shown in Table 1. This list can be expanded as per the required answers from the question word. For example, Q1: \u0917\u093e\u0902 \u0935 \u092e\u0947\u0902 \u093f\u0915\u0924\u0928\u0947 \u092e\u0941 \u0917\u0947 \u0930\u094d \u0930\u0939\u0924\u0947 \u0925\u0947 ? T1: How many chickens were there in the village?", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 209, |
|
"text": "(Vargas-Vera et al., 2003)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Question Analyzer Module", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The answer expects the quantity of the chickens. So, we place it into 'Intf' category", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Question Analyzer Module", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We now see each type of question in detail and see how they are handled: Karta: It involves the question words which requires the answer as the doer. \u093f\u0915\u0938\u0928\u0947 \u092c\u0902 \u0926\u0930 \u0915\u094b Here, the answer is expected to be the consequences of cat being troubled. New Known: \u093f\u092c\u0932\u094d\u0940 \u092a\u0930\u0947 \u0936\u093e\u0928 \u0915\u094d\u092f\u094b\u0902 \u0939\u094b \u0917\u092f\u0940? (Why was the cat angry?), new info from the question: \u093f\u092c\u0932\u094d\u0940 \u092a\u0930\u0947 \u0936\u093e\u0928 \u0939\u0941 \u0908 (cat is angry) Here, the answer is expected to address the reasons why the cat was angry. The output of the question analyzer module for above mentioned question types is shown in Figure 2 . There is a preference order given to these question types, in cases of when two words belonging to two different classes (in Table 1 ) exist in same query. The observed overlaps in-clude: Time and Kya: in this case the question type will be treated as Kya. Kya and Given-New: in case of this overlap, the question type will be treated as Kya.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 532, |
|
"end": 540, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 668, |
|
"end": 675, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Question Analyzer Module", |
|
"sec_num": null |
|
}, |
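
{

"text": "An illustrative Python sketch of the question typing and the overlap preference described above; the romanized question-word mapping is a partial stand-in for the Devanagari words actually used and is not the system's full list.\nQUESTION_WORDS = {\n    'kisne': 'Karta', 'kisko': 'Karma', 'kab': 'Time', 'kahan': 'Loc',\n    'kise': 'Recipient', 'kaisa': 'Adj_Noun', 'kitne': 'Intf', 'kya': 'Kya',\n    'kiske': 'Kiske', 'kiski': 'Kiska',\n}\n\ndef analyze_question(tokens):\n    matched = [QUESTION_WORDS[t] for t in tokens if t in QUESTION_WORDS]\n    if not matched:\n        return 'Default'\n    # Preference order for the observed overlaps: Time+Kya -> Kya, Kya+GivenNew -> Kya.\n    if 'Kya' in matched and ('Time' in matched or 'GivenNew' in matched):\n        return 'Kya'\n    return matched[0]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Question Analyzer Module",

"sec_num": null

},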
|
{ |
|
"text": "The input to this module is the output of the Query Analyzer module (Table 2) and Discourse Collection (Figure 2 ) which is the output of the Reading Comprehension stage of our system. This is seen clearly in Figure 3 . The episode is detected by using Jaccard similarity between the Query(Q) and Root_Node_Sentence(RNS) whose formula is as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 77, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 103, |
|
"end": 112, |
|
"text": "(Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 218, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Episode Selector Module", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Jaccard_Sim(Q, RN S) = n(Q \u2229 RN S) n(Q \u222a RN S)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Episode Selector Module", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Where, n(Q \u2229 RN S) is number of common words in the Query and Root_Node_Sentence and n(Q \u222a RN S) is the total number of words in Query and Root_Node_Sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Episode Selector Module", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Algorithm 1 Weighted Jaccard Similarity We have modified the formula to give better results. The formula is our version of weighted Jaccard similarity, wherein, we take the POS tags of the words which are common between the Query and the Root_Node_Sentence, and give more weightage to the word if it is less frequent, rather than focusing on frequently occurring words, which don't capture the similarity between the Query and Root_Node_Sentence such as prepositions. We have given the priority to rare words based on their POS word tags. Priorities of POS tags as given as Adjective/Adverb > Verb > Noun > Auxillary Verb > Others. The respective POS tags from the parser are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Episode Selector Module", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "procedure JaccardSim(Q, RN S) N ounT ags \u2190 {M N P, M N S, N N } V erbT ags \u2190 {V M } V", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Episode Selector Module", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(JJ/RB) > (V M ) > (N N /N N S/N N P ) > (V AU X) > Others.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Episode Selector Module", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We call this as the Jaccard_Score between the episode and the Query, if we divide Jaccard_Score by (Q \u2229 RN S), we get Normalized_Jac-card_Score. After calculating the Jaccard_Score and the Normalized_Jaccard_Score for each episode in Discourse Collection, we take the episode which has the highest Jaccard_Score, if two episodes have highest Jaccard_Score, we compare their Normalized_Jaccard_Score and choose the higher valued episode as our chosen episode. The pseudo code is given in Algorithm 1. The Output of this module is the Episode_Id(from the Discourse Collection) which has maximum similarity to the Query.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Episode Selector Module", |
|
"sec_num": null |
|
}, |
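
{

"text": "A minimal Python sketch of the weighted Jaccard scoring and the episode selection with its tie-break, as described above. The numeric weights attached to the POS-tag priority classes are illustrative assumptions, not the exact values used by the system.\n# Weights follow the stated priority: Adjective/Adverb > Verb > Noun > Auxiliary verb > Others.\nTAG_WEIGHTS = {'JJ': 5, 'RB': 5, 'VM': 4, 'NN': 3, 'NNS': 3, 'NNP': 3, 'VAUX': 2}\n\ndef weighted_jaccard(query_tokens, rns_tokens, pos_of):\n    # pos_of maps a word to its POS tag (taken from the dependency parser output).\n    common = set(query_tokens) & set(rns_tokens)\n    score = sum(TAG_WEIGHTS.get(pos_of.get(w, ''), 1) for w in common)\n    normalized = score / len(common) if common else 0.0\n    return score, normalized\n\ndef select_episode(query_tokens, discourse_collection, pos_of):\n    # Pick the episode whose Root_Node_Sentence best matches the query;\n    # ties on the raw Jaccard_Score are broken by the Normalized_Jaccard_Score.\n    best_id, best_key = None, (-1, -1.0)\n    for ep_id, episode in discourse_collection.items():\n        key = weighted_jaccard(query_tokens, episode['RNS'].split(), pos_of)\n        if key > best_key:\n            best_id, best_key = ep_id, key\n    return best_id",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Episode Selector Module",

"sec_num": null

},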
|
{ |
|
"text": "This Module generates answer for a given query. It takes the episode chosen by the Episode Selector, Discourse Collection, and the query as input and generates the answer according to the query type. Answer for various question types (Table 2) is generated as follows: Karta: We extract the Karta from the Discourse_Collection for the chosen episode along with Karta_Adpos(Karta_Adjective and Karta_Case_Markers) and give the answer as Karta_Adjective + Karta + Karta_Case_Marker. The answer to the question kIsnE zyAma kO kalama dI? [Karta Question Type] answer will be rAma nE (refer [Karma Question Type] answer will be cUhE kO (refer Figure 5) . Time: If the Parser_Output of the given episode contains the 'k7t' relation between two nodes, then the child node is the output. If there doesn't exist any 'k7t' relation, then 'k7' relation is used and the child node is the answer. If either of these aren't existing, then time is extracted from the 'Time' slot of the Discourse_Collection of the given episode and is displayed as the answer. The answer to the question sIta kaba pathzAlA gayI? [Time Question Type] answer will be subah (refer Figure 6 ). Loc: If the Parser_Output of the given episode contains the 'k7p' relation between two nodes, then the child node is the output. If there doesn't exist any 'k7p' relation, then 'k7' relation is used and the child node is the sIta subah pathzAlA gayI k7p k7t k1 answer. If either of these aren't existing, then location is extracted from the 'Loc' slot of the Discourse_Collection of the given episode and is the answer. The answer to the question sIta kaha gayI? [Loc Question Type] answer will be payhzAla (refer Figure 6) . Recipient: The main verb(MV) is extracted from the Parser_Output of the Episode. If the MV shares relation 'k4' with a child we return that child as the answer. If there doesn't exist any child with 'K4' relation, we check for any child nodes of the MV with 'k4a' relation and return that as the answer. The answer to the question rAma ne kisE kalama dI? [Recipient Question Type] answer will be zyAma (refer Figure 4) . Adj_Noun: From Table 2, we can see that the output is the Noun whose Adjective is asked in the question. Let the Noun whose adjective is asked be MN. We take the Parser_Output of the chosen episode and check for the child nodes of the MN who have relation 'nmod__adj' and return the child node as the answer. The answer to the question kalama kaisI hai? [Adj_Noun Question Type] answer will be suNdara (refer Figure 8) . Intf: From the 2, we can see that the Noun(MN) whose quantity is asked is returned along with the classification of the question.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 234, |
|
"end": 243, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 638, |
|
"end": 647, |
|
"text": "Figure 5)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1146, |
|
"end": 1154, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 1672, |
|
"end": 1681, |
|
"text": "Figure 6)", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 2093, |
|
"end": 2102, |
|
"text": "Figure 4)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2514, |
|
"end": 2523, |
|
"text": "Figure 8)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Rule Based Natural Language Generator Module", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To get the answer we take the Parser_Output of the chosen episode and search for the MN, then we check for child nodes of MN who have relation 'intf' with MN, lets call the child 'NounIntf', we then check child nodes who have relation 'nmod__adj' with MN, lets call it 'NounAdj'. If 'NounIntf' and 'NounAdj' exist, we return the answer as NounIntf + NounAdj and MN as the answer. If 'NounAdj' doesn't exist, we just return NounIntf + MN as the answer. The answer to the question rAma ne kitnE kalama dIyE? [Intf Question Type] answer will be Eka (refer Figure 4) . the Parser_Output of the Discourse Collection of the chosen episode. We then check if either of 'k1s', 'pof', 'k2' relations exist between the MV and its children. If it exists, we check if the child is mentioned in the question, if it isn't mentioned in the question, we return child Node as the answer. If no child exists with above mentioned relations or if it exists and the child has occurred in the question itself then, we check if the 'Given' slot of the discourse Collection is populated for the next episode and return the value in 'NEW' slot of the Discourse Collection as the answer. The answer to the question rAmA nE kyA dIyA? [Kya Question Type] answer will be kalama (refer Figure 4) . Kiske: From Table 2 , we observe that subject whose entity is asked is known. We consider this subject as Main Noun(MN), we check the children of MN who have relation 'k7' and return it as the answer. If this relation doesn't exist, we check for children with 'r6' relation and return it as the answer. 'r6' and 'k7' are called SambandhRelations in Panninian Grammar (Bharati et al., 1995) . The answer to the question kauvA kiske talAza meM udA? [Kiske Question Type] answer will be pAnI (refer Figure 7) . Kiska: We extract the Main Verb(MV) from the Parser_Output of the Discourse Collection for the particular episode. We check the children of MV which have relations in order, 'k2', 'k7', 'r6'. The answer to the question yaha kalama kiskI hai? [Kiska Question Type] answer will be rAma (refer Figure 8) . GivenNew: We can see from Table 2 , the information of GivenNew is given as the output of the Query Analyzer step. If the question is 'Given Known' (Refer 2.2), we choose the episode based on the highest Jaccard similarity (mentioned in 2.2) between the 'Given' slot of the Discourse Collection and the query. The Episode which gets the highest score, is the chosen episode and we return the 'New' slot value as the answer. Similarly for the 'New Known' (Refer 2.2) we choose the episode based on the 'New' slot and the answer is in 'Given' slot of the same episode. The answer to the question jaba rAma billI kE pAsa gayA taba kyA huA? [GivenKnown Question Type] the answer is billI bhAga gaI. (refer Figure 2 ) Default: In case an unknown question type is encountered or no answer has been given for the above question types, we return the Anaphora_Resolved_Sentence of the chosen episode as the answer.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1634, |
|
"end": 1656, |
|
"text": "(Bharati et al., 1995)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 2017, |
|
"end": 2038, |
|
"text": "[Kiska Question Type]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2715, |
|
"end": 2741, |
|
"text": "[GivenKnown Question Type]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 553, |
|
"end": 562, |
|
"text": "Figure 4)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1255, |
|
"end": 1264, |
|
"text": "Figure 4)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1279, |
|
"end": 1286, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 1763, |
|
"end": 1772, |
|
"text": "Figure 7)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2066, |
|
"end": 2075, |
|
"text": "Figure 8)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2104, |
|
"end": 2111, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 2780, |
|
"end": 2788, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Rule Based Natural Language Generator Module", |
|
"sec_num": null |
|
}, |
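
{

"text": "A condensed, illustrative Python dispatcher for a few of the answer-generation rules above. The episode field names ('karta_adj', 'karta_psp', 'ARS', ...) and the (head, child, relation) edge representation are assumptions carried over from the earlier sketches, not the system's actual data structures.\ndef generate_answer(qtype, episode, parse_edges):\n    def child_of(*rels):\n        # Return the child attached via the first relation, in priority order, that matches.\n        for rel in rels:\n            for _, child, r in parse_edges:\n                if r == rel:\n                    return child\n        return None\n\n    if qtype == 'Karta':\n        # Karta_Adjective + Karta + Karta_Case_Marker, skipping missing pieces.\n        parts = [episode.get('karta_adj'), episode.get('karta'), episode.get('karta_psp')]\n        return ' '.join(p for p in parts if p)\n    if qtype == 'Time':\n        return child_of('k7t', 'k7') or episode.get('time', 'tbd')\n    if qtype == 'Loc':\n        return child_of('k7p', 'k7') or episode.get('location', 'tbd')\n    if qtype == 'Recipient':\n        return child_of('k4', 'k4a')\n    # Default: fall back to the anaphora-resolved sentence of the chosen episode.\n    return episode.get('ARS', '')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Rule Based Natural Language Generator Module",

"sec_num": null

},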
|
{ |
|
"text": "Panchatantra is a collection of fables. It has five parts, Mitra-Bheda (The loss of friends), Mitra-laabha (The winning of friends), Kakolukiyam (on crows and owls), Labdhapranasam(Losing what you have gained) and Apariksitakarakam (Ill-Considered actions). We have chosen a corpus of 65 stories from the tales across all parts of Panchantantra to test our system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We collected the stories from the link mentioned in footnote 2 and fixed the syntax (punctuation and spellings). We annotated 440 questions for the above stories and assigned the question types, the ideal episode to be selected from the Discourse Collection and the correct answer for each of the questions. While annotating the questions, we made a conscious effort to involve more questions in type 'Kya' and 'GivenNew' (refer Table 4 ), since they are versatile concepts and we intended to test them extensively against the rules we formatted as the other types are more or less intuitive on the dependency parser tags 3 . We randomly", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 429, |
|
"end": 436, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Root Node Sentence 1 3/7 5/7 5/7 2 7/11 7/11 11/11 3 3/6 3/6 4/6 4 3/5 4/5 4/5 5 1/8 3/8 4/8 6 1/3 2/3 2/3 Table 3 : Episode selection accuracy selected 10 stories from the corpus along with the questions and generated rules based on linguistic heuristics, to avoid overfitting. We then verified the rules on the remaining stories. Out of 440 questions, 72 more questions were rightly answered on using weighted Jaccard similarity when compared to normal Jaccard similarity. That is, 16% episodes were rightly selected when weighted Jaccard similarity was used instead of normal Jaccard similarity on our model. We also compared episode selection accuracy by including modules mentioned in figure 1 on 6 random stories which are different from the previously selected stories. The results can be seen in ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 114, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Anaphora Resolved Sentence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This model currently doesn't answer \u0915\u0948 \u0938\u0947 [How] types of questions, which can be included in future.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Currently we don't resolve synonyms and antonyms to answer the questions, which when done, can improve Episode Selection algorithm and also aim at answering complex questions. The current model assumes the passage to be in chronological order. We can improve the model if we capture the relative time of the episode to suite the passages which don't follow the chronological order. Versatile questions such as 'GivenNew' and 'Kya' can be improved by increasing the scope of answer retrieval to multiple sentences or episodes around the selected episode unlike the single episode range implemented in our model. This will in-turn also increase the scope of model to include longer and complex texts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Reading Comprehension is a complex task, which involves comprehending the passage and answering the questions following the passage. Once, we are able to structure this unstructured data(passage), we can answer the questions relatively well without complex approaches. The rules in linguistic are intuitive and are capable of answering complex questions. Since it's rule based, there is no requirement of large data to obtain promising results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In the model we managed to get 75% accuracy with just 65 stories and managed to answer wide range of answers. This model is versatile and can be extended to other Indian languages provided the dependency parser (similar to one we used 3 ) exists.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://www.hindisahityadarpan.in/2016/06/panchatantra-completestories-hindi.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://bitbucket.org/iscnlp/parser/src/master/README.rst", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://ltrc.iiit.ac.in/analyzer/hindi/index.cgi", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Natural language processing: a Paninian perspective", |
|
"authors": [ |
|
{ |
|
"first": "Akshar", |
|
"middle": [], |
|
"last": "Bharati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vineet", |
|
"middle": [], |
|
"last": "Chaitanya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajeev", |
|
"middle": [], |
|
"last": "Sangal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Ramakrishnamacharyulu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Akshar Bharati, Vineet Chaitanya, Rajeev Sangal, and KV Ramakrishnamacharyulu. 1995. Natu- ral language processing: a Paninian perspective. Prentice-Hall of India New Delhi.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Ssf: Shakti standard format guide", |
|
"authors": [ |
|
{ |
|
"first": "Akshar", |
|
"middle": [], |
|
"last": "Bharati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajeev", |
|
"middle": [], |
|
"last": "Sangal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipti M", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Akshar Bharati, Rajeev Sangal, and Dipti M Sharma. 2007. Ssf: Shakti standard format guide.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Anaphora Resolution in Hindi", |
|
"authors": [ |
|
{ |
|
"first": "Praveen", |
|
"middle": [], |
|
"last": "Dakwale", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Praveen Dakwale. 2014. Anaphora Resolution in Hindi. Ph.D. thesis, PhD thesis, International Institute of Information Technology Hyderabad.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Prashnottar: a hindi question answering system", |
|
"authors": [ |
|
{ |
|
"first": "Shriya", |
|
"middle": [], |
|
"last": "Sahu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nandkishor", |
|
"middle": [], |
|
"last": "Vasnik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Devshri", |
|
"middle": [], |
|
"last": "Roy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "International Journal of Computer Science & Information Technology", |
|
"volume": "4", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shriya Sahu, Nandkishor Vasnik, and Devshri Roy. 2012. Prashnottar: a hindi question answering system. International Journal of Computer Sci- ence & Information Technology, 4(2):149.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Cognitive structures in comprehension and memory of narrative discourse", |
|
"authors": [ |
|
{ |
|
"first": "Perry", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Thorndyke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "Cognitive Psychology", |
|
"volume": "9", |
|
"issue": "1", |
|
"pages": "77--110", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/0010-0285(77)90005-6" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Perry W. Thorndyke. 1977. Cognitive structures in comprehension and memory of narrative dis- course. Cognitive Psychology, 9(1):77 -110.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Aqua: an ontology driven question answering system", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Dr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Enrico", |
|
"middle": [], |
|
"last": "Vargas-Vera", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Motta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Domingue", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dr. Maria Vargas-Vera, Enrico Motta, and John Domingue. 2003. Aqua: an ontology driven question answering system.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Figure 1: Comprehension Design", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "the 'Karma' slot of the Discourse_Collection isn't populated, the Anaphora_Resolved_Sentence of the chosen episode is returned. The answer to the question siMha ne kiskO apnA dosta banAyA?", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"text": "Loc and Time", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"text": "Kya: We extract the Main Verb (MV)", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"text": "Types of Questions.", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"text": "The question that gives an activity and requests for the consequence of the activity falls in category of GivenKnown. The questions which describe an activity and expects the cause of the activity as the output, it falls in category of NewKnown.", |
|
"html": null, |
|
"content": "<table><tr><td>Query</td><td>Question Analyzer</td><td>Episode Selector</td><td colspan=\"2\">Rule Based NLG</td><td>Answer</td></tr><tr><td/><td/><td colspan=\"2\">Discourse</td><td/></tr><tr><td/><td/><td>Collection</td><td/><td/></tr><tr><td/><td colspan=\"3\">Figure 3: Question Answering Design</td><td/></tr><tr><td colspan=\"3\">\u092a\u0930\u0947 \u0936\u093e\u0928 \u093f\u0915\u092f\u093e? (Who troubled the monkey?) Karma: The question which requires the an-swer as the act of the sentence. \u092c\u0932\u094d\u0940 \u093f\u0915\u0938\u0915\u094b \u093f\u0926\u0916\u0940? (Who saw the cat?)</td><td>Question Type Karta Karma</td><td colspan=\"2\">Output Format ['Karta'] ['Karma']</td></tr><tr><td/><td/><td/><td>Time</td><td>['Time']</td></tr><tr><td/><td/><td/><td>Loc</td><td>['Loc']</td></tr><tr><td/><td/><td/><td>Recipient</td><td colspan=\"2\">['Recipient']</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">['Adj_Noun', one word</td></tr><tr><td/><td/><td/><td>Adj_Noun</td><td colspan=\"2\">before the question</td></tr><tr><td/><td/><td/><td/><td>word]</td></tr><tr><td/><td/><td/><td>Intf</td><td colspan=\"2\">['Intf', one word after the question word]</td></tr><tr><td/><td/><td/><td>Kya</td><td>['Kya']</td></tr><tr><td/><td/><td/><td>Kiske</td><td colspan=\"2\">['Kiske', one word after the question word]</td></tr><tr><td/><td/><td/><td>Kiska</td><td colspan=\"2\">['Kiska']</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">['GivenNew', the infor-</td></tr><tr><td/><td/><td/><td>GivenNew</td><td colspan=\"2\">mation which is either</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">new or given]</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "Output of the Question Analyzer.", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"text": "Overall accuracy of the answers based on question Types is Shown inTable 4. The figure clearly shows good accuracy for majority of the questions. Since the 'Kya' and 'GivenNew' format of the questions are versatile and the answers can be subjective, the", |
|
"html": null, |
|
"content": "<table><tr><td>for the same. It can</td></tr><tr><td>be observed that the weighted Jaccard sim-</td></tr><tr><td>ilarity accuracy improved consistently as we</td></tr><tr><td>added the Root Node Resolution and then</td></tr><tr><td>the Anaphora Resolution modules. The be-</td></tr><tr><td>low example demonstrates the improvement in</td></tr><tr><td>episode selection accuracy for different mod-</td></tr><tr><td>ules:</td></tr><tr><td>Question: \u0930\u093e\u092e \u0915\u094d\u092f\u093e \u0916\u093e \u0930\u0939\u093e \u0925\u093e? [Translation:</td></tr><tr><td>What was Ram eating?]</td></tr><tr><td>Original Sentence: \u0909\u0938\u0928\u0947 \u0906\u092e \u0916\u093e\u092f\u093e. [Transla-</td></tr><tr><td>tion: He ate a mango] Jaccard Score: 0 Root Node Sentence: \u0935\u0939 \u0906\u092e \u0916\u093e [Translation:</td></tr><tr><td>He eat mango] Jaccard Score: 4 Anaphora Resolved Sentence: \u0916\u093e[Translation: Ram eat mango] Jaccard \u0930\u093e\u092e \u0906\u092e</td></tr><tr><td>Score: 7</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"text": "Accuracy of the answers accuracy for these categories cannot be comparable to the direct question types whose answers are obtainable through dependency parser tags solely. Overall accuracy of the system is 75.45%.", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |