{ "paper_id": "I11-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:32:30.388309Z" }, "title": "Learning Logical Structures of Paragraphs in Legal Articles", "authors": [ { "first": "Xuan", "middle": [], "last": "Ngo", "suffix": "", "affiliation": { "laboratory": "", "institution": "Japan Advanced Institute of Science and Technology", "location": { "addrLine": "1-1 Asahidai", "postCode": "923-1292", "settlement": "Nomi, Ishikawa", "country": "Japan" } }, "email": "" }, { "first": "Nguyen", "middle": [], "last": "Bach", "suffix": "", "affiliation": { "laboratory": "", "institution": "Japan Advanced Institute of Science and Technology", "location": { "addrLine": "1-1 Asahidai", "postCode": "923-1292", "settlement": "Nomi, Ishikawa", "country": "Japan" } }, "email": "bachnx@jaist.ac.jp" }, { "first": "Tran", "middle": [ "Thi" ], "last": "Le Minh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Japan Advanced Institute of Science and Technology", "location": { "addrLine": "1-1 Asahidai", "postCode": "923-1292", "settlement": "Nomi, Ishikawa", "country": "Japan" } }, "email": "" }, { "first": "Akira", "middle": [], "last": "Oanh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Japan Advanced Institute of Science and Technology", "location": { "addrLine": "1-1 Asahidai", "postCode": "923-1292", "settlement": "Nomi, Ishikawa", "country": "Japan" } }, "email": "oanhtt@jaist.ac.jp" }, { "first": "", "middle": [], "last": "Shimazu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Japan Advanced Institute of Science and Technology", "location": { "addrLine": "1-1 Asahidai", "postCode": "923-1292", "settlement": "Nomi, Ishikawa", "country": "Japan" } }, "email": "shimazu@jaist.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a new task, learning logical structures of paragraphs in legal articles, which is studied in research on Legal Engineering (Katayama, 2007). The goals of this task are recognizing logical parts of law sentences in a paragraph, and then grouping related logical parts into some logical structures of formulas, which describe logical relations between logical parts. We present a two-phase framework to learn logical structures of paragraphs in legal articles. In the first phase, we model the problem of recognizing logical parts in law sentences as a multi-layer sequence learning problem, and present a CRF-based model to recognize them. In the second phase, we propose a graph-based method to group logical parts into logical structures. We consider the problem of finding a subset of complete sub-graphs in a weighted-edge complete graph, where each node corresponds to a logical part, and a complete sub-graph corresponds to a logical structure. We also present an integer linear programming formulation for this optimization problem. Our models achieve 74.37% in recognizing logical parts, 79.59% in recognizing logical structures, and 55.73% in the whole task on the Japanese National Pension Law corpus.", "pdf_parse": { "paper_id": "I11-1003", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a new task, learning logical structures of paragraphs in legal articles, which is studied in research on Legal Engineering (Katayama, 2007). The goals of this task are recognizing logical parts of law sentences in a paragraph, and then grouping related logical parts into some logical structures of formulas, which describe logical relations between logical parts. 
We present a two-phase framework to learn logical structures of paragraphs in legal articles. In the first phase, we model the problem of recognizing logical parts in law sentences as a multi-layer sequence learning problem, and present a CRF-based model to recognize them. In the second phase, we propose a graph-based method to group logical parts into logical structures. We consider the problem of finding a subset of complete sub-graphs in a weighted-edge complete graph, where each node corresponds to a logical part, and a complete sub-graph corresponds to a logical structure. We also present an integer linear programming formulation for this optimization problem. Our models achieve 74.37% in recognizing logical parts, 79.59% in recognizing logical structures, and 55.73% in the whole task on the Japanese National Pension Law corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Legal Engineering (Katayama, 2007) is a new research field which aims to achieve a trustworthy electronic society. Legal Engineering regards laws as a kind of software for our society. Specifically, laws such as pension law are specifications for information systems such as pension systems.", "cite_spans": [ { "start": 18, "end": 34, "text": "(Katayama, 2007)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To achieve a trustworthy society, laws need to be verified for their consistency and freedom from contradiction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Legal texts have some specific characteristics that make them different from other kinds of documents. One of the most important characteristics is that legal texts usually have some specific structures at both sentence and paragraph levels. At the sentence level, a law sentence can roughly be divided into two logical parts: a requisite part and an effectuation part (Bach, 2011a; Bach et al., 2011b; Tanaka et al., 1993) . At the paragraph level, a paragraph usually contains a main sentence 1 and one or more subordinate sentences (Takano et al., 2010) .", "cite_spans": [ { "start": 364, "end": 377, "text": "(Bach, 2011a;", "ref_id": "BIBREF0" }, { "start": 378, "end": 397, "text": "Bach et al., 2011b;", "ref_id": "BIBREF1" }, { "start": 398, "end": 419, "text": "Tanaka et al., 1993)", "ref_id": null }, { "start": 531, "end": 552, "text": "(Takano et al., 2010)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Analyzing logical structures of legal texts is an important task in Legal Engineering. The outputs of this task will help people understand legal texts. They can easily understand 1) what a law sentence says, 2) in what cases the law sentence can be applied, and 3) what subjects are related to the provision described in the law sentence. This task is a preliminary step which supports other tasks in legal text processing (translating legal articles into logical and formal representations, legal text summarization, legal text translation, question answering in legal domains, etc.) and serves legal text verification, an important goal of Legal Engineering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There have been some studies analyzing logical structures of legal texts.
(Bach et al., 2011b) presents the RRE task 2 , which recognizes the logical structure of law sentences. (Bach et al., 2010) describes an investigation on contributions of words to the RRE task. (Kimura et al., 2009) focuses on dealing with legal sentences including itemized and referential expressions. These works, however, only analyze logical structures of legal texts at the sentence level. At the paragraph level, (Takano et al., 2010) classifies a legal paragraph into one of six predefined categories: A, B, C, D, E, and F . Among six types, Type A, B, and C correspond to cases in which the main sentence is the first sentence, and subordinate sentences are other sentences. In paragraphs of Type D, E, and F , the main sentence is the first or the second sentence, and a subordinate sentence is an embedded sentence in parentheses within the main sentence.", "cite_spans": [ { "start": 74, "end": 94, "text": "(Bach et al., 2011b)", "ref_id": "BIBREF1" }, { "start": 178, "end": 197, "text": "(Bach et al., 2010)", "ref_id": "BIBREF2" }, { "start": 268, "end": 289, "text": "(Kimura et al., 2009)", "ref_id": null }, { "start": 494, "end": 515, "text": "(Takano et al., 2010)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a task of learning logical structures of legal articles at the paragraph level. We propose a two-phase framework to complete the task. We also describe experimental results on real legal data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our main contributions can be summarized in the following points:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Introducing a new task to legal text processing, learning logical structures of paragraphs in legal articles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Presenting an annotated corpus for the task, the Japanese National Pension Law corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Proposing a two-phase framework and providing solutions to solve the task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Evaluating our framework on the real annotated corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows. Section 2 describes our task and its two sub-tasks: recognition of logical parts and recognition of logical structures. In Section 3, we present our framework and proposed solutions. Experimental results on real legal articles are described in Section 4. Finally, Section 5 gives some conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Learning logical structures of paragraphs in legal articles is the task of recognition of logical structures between logical parts in law sentences. A logical structure is usually formed from a pair of a requisite part and an effectuation part. These two parts are built from other kinds of logical parts such as topic parts, antecedent parts, consequent parts, and so on (Bach, 2011a; Bach et al., 2011b) 3 . 
Usually, consequent parts describe a law provision, antecedent parts describe cases in which the law provision can be applied, and topic parts describe subjects which are related to the law provision. In this paper, a logical structure can be defined as a set of some related logical parts. Figure 1 shows two cases of the inputs and outputs of the task. In the first case, the input is a paragraph of two sentences, and the outputs are four logical parts, which are grouped into two logical structures. In the second case, the input is a paragraph consisting of four sentences, and the outputs are four logical parts, which are grouped into three logical structures. An example in natural language 4 is presented in Figure 2 .", "cite_spans": [ { "start": 372, "end": 385, "text": "(Bach, 2011a;", "ref_id": "BIBREF0" }, { "start": 386, "end": 405, "text": "Bach et al., 2011b)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 703, "end": 711, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1129, "end": 1137, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Formulation", "sec_num": "2" }, { "text": "Let s be a law sentence in the law sentence space S; then s can be represented by a sequence of words s = [w 1 w 2 . . . w n ]. A legal paragraph x in the legal paragraph space X is a sequence of law sentences", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 1: Recognition of Logical Parts", "sec_num": "2.1" }, { "text": "x = [s 1 s 2 . . . s l ], where s i \u2208 S, \u2200i = 1, 2, . . . , l.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 1: Recognition of Logical Parts", "sec_num": "2.1" }, { "text": "For each paragraph x, we denote a logical part p by a quad-tuple p = (b, e, k, c), where b, e, and k are three integers which indicate the position of the beginning word, the position of the end word, and the sentence position of p, and c is a logical part category in the set of predefined categories C. Formally, the set P of all possible logical parts defined in a paragraph x can be described as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 1: Recognition of Logical Parts", "sec_num": "2.1" }, { "text": "P = {(b, e, k, c)|1 \u2264 k \u2264 l, 1 \u2264 b \u2264 e \u2264 len(k), c \u2208 C}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 1: Recognition of Logical Parts", "sec_num": "2.1" }, { "text": "In the above definition, l is the number of sentences in the paragraph x, and len(k) is the length of the k th sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 1: Recognition of Logical Parts", "sec_num": "2.1" }, { "text": "In this sub-task, we want to recognize some non-overlapping (but possibly embedded) logical parts in an input paragraph. A solution for this task is a subset y \u2286 P which does not violate the overlapping relationship. We say that two logical parts p 1 and p 2 are overlapping if and only if they are in the same sentence (k 1 = k 2 ) and b 1 < b 2 \u2264 e 1 < e 2 or b 2 < b 1 \u2264 e 2 < e 1 . We denote the overlapping relationship by \u223c. We also say that p 1 is embedded in p 2 if and only if they are in the same sentence (k 1 = k 2 ) and b 2 \u2264 b 1 \u2264 e 1 \u2264 e 2 , and denote the embedded relationship by \u227a.
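To make the quad-tuple representation and the two relations just defined concrete, here is a minimal Python sketch (the names LogicalPart, overlaps, embedded_in, and is_valid_solution are illustrative, not from the paper): a part is stored as (b, e, k, c), the overlapping (\u223c) and embedded (\u227a) relations are tested exactly as defined above, and a candidate output is checked for the non-overlapping requirement.

```python
from typing import List, NamedTuple

class LogicalPart(NamedTuple):
    b: int  # position of the beginning word (1-based, inside sentence k)
    e: int  # position of the end word
    k: int  # index of the sentence within the paragraph
    c: str  # logical part category, e.g. "A", "C", "T1", "EL", ...

def overlaps(p1: LogicalPart, p2: LogicalPart) -> bool:
    """p1 ~ p2: same sentence and the two spans cross without one nesting inside the other."""
    if p1.k != p2.k:
        return False
    return (p1.b < p2.b <= p1.e < p2.e) or (p2.b < p1.b <= p2.e < p1.e)

def embedded_in(p1: LogicalPart, p2: LogicalPart) -> bool:
    """p1 is embedded in p2: same sentence and the span of p1 lies inside the span of p2."""
    return p1.k == p2.k and p2.b <= p1.b and p1.e <= p2.e

def is_valid_solution(y: List[LogicalPart]) -> bool:
    """A candidate output y may contain embedded parts but no overlapping pair."""
    return not any(overlaps(u, v) for i, u in enumerate(y) for v in y[i + 1:])
```

For instance, a topic part embedded in a consequent part of the same sentence passes is_valid_solution, while two crossing spans do not.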
Formally, the solution space can be described as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 1: Recognition of Logical Parts", "sec_num": "2.1" }, { "text": "Y = {y \u2286 P |\u2200u, v \u2208 y, u \u2241 v}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 1: Recognition of Logical Parts", "sec_num": "2.1" }, { "text": "The learning problem in this sub-task is to learn a function R : X \u2192 Y from a set of m training samples", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 1: Recognition of Logical Parts", "sec_num": "2.1" }, { "text": "{(x i , y i )|x i \u2208 X, y i \u2208 Y, \u2200i = 1, 2, . . . , m}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 1: Recognition of Logical Parts", "sec_num": "2.1" }, { "text": "In our task, we consider the following types of logical parts:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 1: Recognition of Logical Parts", "sec_num": "2.1" }, { "text": "1. An antecedent part is denoted by A. 2. A consequent part is denoted by C. 3. A topic part which depends on the antecedent part is denoted by T 1. 4. A topic part which depends on the consequent part is denoted by T 2. 5. A topic part which depends on both the antecedent part and the consequent part is denoted by T 3. 6. The left part of an equivalent statement is denoted by EL. 7. The right part of an equivalent statement is denoted by ER. 8. An object part, whose meaning is defined differently in different cases, is denoted by Ob. 9. An original replacement part, which will be replaced by other replacement parts (denoted by RepR) in specific cases, is denoted by RepO.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 1: Recognition of Logical Parts", "sec_num": "2.1" }, { "text": "Compared with previous works (Bach et al., 2011b) , we introduce three new kinds of logical parts: Ob, RepO, and RepR.", "cite_spans": [ { "start": 29, "end": 49, "text": "(Bach et al., 2011b)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 1: Recognition of Logical Parts", "sec_num": "2.1" }, { "text": "In the second sub-task, the goal is to recognize a set of logical structures given a set of logical parts. Let G = <V, E> be a complete undirected graph with the vertex set V and the edge set E. A real-valued function f is defined on E as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 2: Recognition of Logical Structures", "sec_num": "2.2" }, { "text": "f : E \u2192 R, e \u2208 E \u2192 f (e) \u2208 R. In this sub-task, each vertex of the graph corresponds to a logical part, and a complete sub-graph corresponds to a logical structure. The value on an edge connecting two vertices expresses the degree to which the two vertices belong to one logical structure. A positive (negative) value means that the two vertices are likely (unlikely) to belong to one logical structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 2: Recognition of Logical Structures", "sec_num": "2.2" }, { "text": "Let G s be a complete sub-graph of G; then v(G s ) and e(G s ) are the set of vertices and the set of edges of G s , respectively.
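As a small, non-authoritative illustration of this graph view, the sketch below builds the table of edge scores f over all pairs of recognized logical parts; build_edge_scores and the toy scorer are assumptions for illustration only, since the paper does not commit to a particular scoring model at this point.

```python
import itertools
from typing import Callable, Dict, List, Tuple, TypeVar

Part = TypeVar("Part")  # any representation of a logical part, e.g. the LogicalPart tuple above

def build_edge_scores(
    parts: List[Part],
    edge_score: Callable[[Part, Part], float],
) -> Dict[Tuple[int, int], float]:
    """Weighted complete graph of Sub-task 2: one vertex per logical part and
    one real-valued score f(e) per edge (i, j) with i < j.  A positive score
    means the two parts are likely to belong to one logical structure; a
    negative score means they are not."""
    return {
        (i, j): edge_score(parts[i], parts[j])
        for i, j in itertools.combinations(range(len(parts)), 2)
    }

# Toy scorer, purely for illustration (uses the LogicalPart fields of the previous sketch):
# favour pairs of parts that come from the same sentence.
def same_sentence_score(p: "LogicalPart", q: "LogicalPart") -> float:
    return 1.0 if p.k == q.k else -0.5
```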
We define the total value of a sub-graph as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 2: Recognition of Logical Structures", "sec_num": "2.2" }, { "text": "f (G s ) = f (e(G s )) = \u2211 e\u2208e(Gs) f (e).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 2: Recognition of Logical Structures", "sec_num": "2.2" }, { "text": "Let \u2126 be the set of all complete sub-graphs of G. The problem becomes determining a subset \u03a8 \u2286 \u2126 that satisfies the following constraints:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 2: Recognition of Logical Structures", "sec_num": "2.2" }, { "text": "1. \u2200g \u2208 \u03a8, |v(g)| \u2265 2, 2. \u222a g\u2208\u03a8 v(g) = V , 3. \u2200g 1 , g 2 \u2208 \u03a8|v(g 1 ) \u2286 v(g 2 ) \u21d2 v(g 1 ) = v(g 2 ), 4. \u2200g \u2208 \u03a8, \u222a h\u2208\u03a8,h\u2260g v(h) \u2260 V , and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 2: Recognition of Logical Structures", "sec_num": "2.2" }, { "text": "5. \u2211 g\u2208\u03a8 f (g) \u2192 maximize. Constraint 1), the minimal constraint, says that each logical structure must contain at least two logical parts. In principle, there could be a logical structure that contains only a consequent part. Due to the characteristics of Japanese law sentences, however, our corpus does not contain such cases. A logical structure which contains a consequent part will also contain a topic part or an antecedent part or both of them. So a logical structure contains at least two logical parts. Constraint 2), the complete constraint, says that each logical part must belong to at least one logical structure. Constraint 3), the maximal constraint, says that we cannot have two different logical structures such that the set of logical parts in one logical structure contains the set of logical parts in the other logical structure. Constraint 4), the significant constraint, says that if we remove any logical structure from the solution, Constraint 2) will be violated. Although Constraint 3) is guaranteed by Constraint 4), we state it explicitly because of its importance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 2: Recognition of Logical Structures", "sec_num": "2.2" }, { "text": "3 Proposed Solutions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-Task 2: Recognition of Logical Structures", "sec_num": "2.2" }, { "text": "This sub-section presents our model for recognizing logical parts. We consider the recognition problem as a multi-layer sequence learning problem. First, we give some related notions. Let s be a law sentence, and P be the set of logical parts of s, P = {p 1 , p 2 , . . . , p m }. Layer 1 (s) (the outermost layer) is defined as the set of logical parts in P which are not embedded in any other part. Layer i (s) is defined as the set of logical parts in P \\ \u222a i\u22121 k=1 Layer k (s) which are not embedded in any other part in P \\ \u222a i\u22121 k=1 Layer k (s).
Formally, we have:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-layer Sequence Learning for Logical Part Recognition", "sec_num": "3.1" }, { "text": "Layer 1 (s) = {p|p \u2208 P, p \u2280 q, \u2200q \u2208 P, q \u2260 p}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-layer Sequence Learning for Logical Part Recognition", "sec_num": "3.1" }, { "text": "Layer i (s) = {p|p \u2208 Q i , p \u2280 q, \u2200q \u2208 Q i , q \u2260 p}, where Q i = P \\ \u222a i\u22121 k=1 Layer k (s).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-layer Sequence Learning for Logical Part Recognition", "sec_num": "3.1" }, { "text": "Figure 3 illustrates a law sentence with four logical parts in three layers: Part 1 and Part 2 in Layer 1 , Part 3 in Layer 2 , and Part 4 in Layer 3 . Let K be the number of layers in a law sentence s; our model recognizes logical parts in K steps. In the k th step we recognize logical parts in Layer k . In each layer, we model the recognition problem as a sequence labeling task in which each word is an element. Logical parts in Layer i\u22121 will be used as the input sequence in the i th step (in the first step, we use the original sentence as input). Figure 4 gives an example of labeling for an input sentence. The sentence consists of three logical parts in two layers. In our model, we use the IOE tag setting: the last element of a part is tagged with E, the other elements of a part are tagged with I, and an element not included in any part is tagged with O.", "cite_spans": [], "ref_spans": [ { "start": 6, "end": 14, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 558, "end": 566, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Multi-layer Sequence Learning for Logical Part Recognition", "sec_num": "3.1" }, { "text": "Let K * be the maximum number of layers in all law sentences in the training data. We learn K * models, in which the k th model is learned from logical parts in Layer k of the training data, using Conditional Random Fields (Lafferty et al., 2001; Kudo, CRF toolkit) . In the testing phase, we first apply the first model to the input law sentence, and then apply the i th model to the predicted logical parts in Layer i\u22121 .", "cite_spans": [ { "start": 219, "end": 242, "text": "(Lafferty et al., 2001;", "ref_id": "BIBREF12" }, { "start": 243, "end": 261, "text": "Kudo, CRF toolkit)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Multi-layer Sequence Learning for Logical Part Recognition", "sec_num": "3.1" }, { "text": "Suppose that G\u2032 is a sub-graph of G such that G\u2032 contains all the vertices of G and the degree of each vertex in G\u2032 is greater than zero; then the set of all the maximal complete sub-graphs (or cliques) of G\u2032 will satisfy all the minimal, complete, maximal, and significant constraints. We also note that a set of cliques that satisfies all these four constraints will form a sub-graph that has the same two properties as G\u2032.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ILP for Recognizing Logical Structures", "sec_num": "3.2" }, { "text": "Let \u039b be the set of all such sub-graphs G\u2032 of G; the sub-task now consists of two steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ILP for Recognizing Logical Structures", "sec_num": "3.2" }, { "text": "1. Finding G\u2032 = argmax G\u2032\u2208\u039b f (G\u2032), and 2.
Finding all cliques of G\u2032.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ILP for Recognizing Logical Structures", "sec_num": "3.2" }, { "text": "Each clique found in the second step will correspond to a logical structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ILP for Recognizing Logical Structures", "sec_num": "3.2" }, { "text": "Recently, some studies have shown that integer linear programming (ILP) formulations are an effective way to solve many NLP problems such as semantic role labeling (Punyakanok, 2004) , coreference resolution (Denis and Baldridge, 2007) , summarization (Clarke and Lapata, 2008) , dependency parsing (Martins et al., 2009) , and so on. The advantage of ILP formulations is that we can easily incorporate non-local features or global constraints, which is difficult in traditional algorithms. Although solving an ILP is NP-hard in general, some fast algorithms and available tools 5 make it a practical solution for many NLP problems (Martins et al., 2009) .", "cite_spans": [ { "start": 166, "end": 184, "text": "(Punyakanok, 2004)", "ref_id": "BIBREF15" }, { "start": 210, "end": 237, "text": "(Denis and Baldridge, 2007)", "ref_id": "BIBREF7" }, { "start": 254, "end": 279, "text": "(Clarke and Lapata, 2008)", "ref_id": "BIBREF6" }, { "start": 301, "end": 323, "text": "(Martins et al., 2009)", "ref_id": "BIBREF13" }, { "start": 635, "end": 657, "text": "(Martins et al., 2009)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "ILP for Recognizing Logical Structures", "sec_num": "3.2" }, { "text": "In this work, we exploit ILP to solve the first step. Let N be the number of vertices of G; we introduce a set of integer variables {x ij }, 1 \u2264 i < j \u2264 N.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ILP for Recognizing Logical Structures", "sec_num": "3.2" } ], "ref_entries": { "TABREF0": { "html": null, "type_str": "table", "content": "Logical Part: C 248, A 286, T1 0, T2 114, T3 12, EL 55, ER 57, Ob 9, RepO 12, RepR 14", "text": "Statistics on logical parts of the JNPL corpus", "num": null }, "TABREF1": { "html": null, "type_str": "table", "content": "
Experimental results for Sub-task 1 on the JNPL corpus (W: Word; P: POS tag; B: Bunsetsu tag)
Model     Prec(%)  Recall(%)  F1(%)
Baseline  79.70    52.54      63.33
W         79.18    69.27      73.89
W+P       77.62    68.77      72.93
W+B       79.63    69.76      74.37
W+P+B     77.89    69.39      73.39
", "text": "", "num": null }, "TABREF2": { "html": null, "type_str": "table", "content": "
Logical Part  Prec(%)  Recall(%)  F1(%)
C             83.41    75.00      78.98
EL            76.74    60.00      67.35
ER            41.94    22.81      29.55
Ob            0.00     0.00       0.00
A             80.42    80.42      80.42
RepO          100      16.67      28.57
RepR          100      28.57      44.44
T2            83.64    80.70      82.14
T3            60.00    25.00      35.29
Overall       79.63    69.76      74.37
", "text": "Experimental results in more details", "num": null }, "TABREF3": { "html": null, "type_str": "table", "content": "
Experiments on Sub-task 2
Gold Input Setting
Model      Prec(%)  Recall(%)  F1(%)
Heuristic  71.19    81.24      75.89
ILP        76.56    82.87      79.59
End-to-End Setting
Model      Prec(%)  Recall(%)  F1(%)
Heuristic  54.88    47.84      51.12
ILP        57.51    54.06      55.73
", "text": "", "num": null } } } }