{
"paper_id": "I08-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:42:37.073911Z"
},
"title": "Experiments on Semantic-based Clustering for Cross-document Coreference",
"authors": [
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {
"addrLine": "211 Portobello Street -Sheffield",
"postCode": "S1 4DP",
"country": "England, UK"
}
},
"email": "saggion@dcs.shef.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe clustering experiments for cross-document coreference for the first Web People Search Evaluation. In our experiments we apply agglomerative clustering to group together documents potentially referring to the same individual. The algorithm is informed by the results of two different summarization strategies and an offthe-shelf named entity recognition component. We present different configurations of the system and show the potential of the applied techniques. We also present an analysis of the impact that semantic information and text summarization have in the clustering process.",
"pdf_parse": {
"paper_id": "I08-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe clustering experiments for cross-document coreference for the first Web People Search Evaluation. In our experiments we apply agglomerative clustering to group together documents potentially referring to the same individual. The algorithm is informed by the results of two different summarization strategies and an offthe-shelf named entity recognition component. We present different configurations of the system and show the potential of the applied techniques. We also present an analysis of the impact that semantic information and text summarization have in the clustering process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Finding information about people on huge text collections or on-line repositories on the Web is a common activity. In ad-hoc Internet retrieval, a request for documents/pages referring to a person name may return thousand of pages which although containing the name, do not refer to the same individual. Crossdocument coreference is the task of deciding if two entity mentions in two sources refer to the same individual. Because person names are highly ambiguous (i.e., names are shared by many individuals), deciding if two documents returned by a search engine such as Google or Yahoo! refer to the same individual is a difficult problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automatic techniques for solving this problem are required not only for better access to information but also in natural language processing applications such as multidocument summarization, question answering, and information extraction. Here, we concentrate on the Web People Search Task (Artiles et al., 2007) as defined in the SemEval 2007 Workshop: a search engine user types in a person name as a query. Instead of ranking web pages, an ideal system should organise search results in as many clusters as there are different people sharing the same name in the documents returned by the search engine. The input is, therefore, the results given by a web search engine using a person name as query. The output is a number of sets, each containing documents referring to the same individual. The task is related to the coreference resolution problem disregarding however the linking of mentions of the target entity inside each single document.",
"cite_spans": [
{
"start": 290,
"end": 312,
"text": "(Artiles et al., 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Similarly to (Bagga and Baldwin, 1998; Phan et al., 2006) , we have addressed the task as a document clustering problem. We have implemented our own clustering algorithms but rely on available extraction and summarization technology to produce document representations used as input for the clustering procedure. We will shown that our techniques produce not only very good results but are also very competitive when compared with SemEval 2007 systems. We will also show that carefully selection of document representation is of paramount importance to achieve good performance. Our system has a similar level of performance as the best system in the recent SemEval 2007 evaluation framework. This paper extends our previous work on this task (Saggion, 2007) .",
"cite_spans": [
{
"start": 13,
"end": 38,
"text": "(Bagga and Baldwin, 1998;",
"ref_id": "BIBREF1"
},
{
"start": 39,
"end": 57,
"text": "Phan et al., 2006)",
"ref_id": "BIBREF9"
},
{
"start": 743,
"end": 758,
"text": "(Saggion, 2007)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The SemEval evaluation has prepared two sets of data to investigate the cross-document coreference problem: one for development and one for testing. The data consists of around 100 Web files per person name, which have been frozen and so, can be used as an static corpus. Each file in the corpus is associated with an integer number which indicates the rank at which the particular page was retrieved by the search engine. In addition to the files themselves, the following information was available: the page title, the url, and the snippet. In addition to the data itself, human assessments are provided which are used for evaluating the output of the automatic systems. The assessment for each person name is a file which contains a number of sets where each set is assumed to contain all (and only those) pages that refer to one individual. The development data is a selection of person names from different sources such as participants of the European Conference on Digital Libraries (ECDL) 2006 and the on-line encyclopaedia Wikipedia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Framework",
"sec_num": "2"
},
{
"text": "The test data to be used by the systems consisted of 30 person names from different sources: (i) 10 names were selected from Wikipedia; (ii) 10 names were selected from participants in the ACL 2006 conference; and finally, (iii) 10 further names were selected from the US Census. One hundred documents were retrieved using the person name as a query using the search engine Yahoo!.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Framework",
"sec_num": "2"
},
{
"text": "Metrics used to measure the performance of automatic systems against the human output were borrowed from the clustering literature (Hotho et al., 2003) and they are defined as follows:",
"cite_spans": [
{
"start": 131,
"end": 151,
"text": "(Hotho et al., 2003)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Framework",
"sec_num": "2"
},
{
"text": "Precision(A, B) = |A \u2229 B| |A| Purity(C, L) = n i=1 |Ci| n maxjPrecision(Ci, Lj) Inverse Purity(C, L) = n i=1 |Li| n maxjPrecision(Li, Cj) F-Score\u03b1(C, L) = Purity(C, L) * Inverse Purity(C, L) \u03b1Purity(C, L) + (1 \u2212 \u03b1)Inverse Purity(C, L)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Framework",
"sec_num": "2"
},
{
"text": "where C is the set of clusters to be evaluated and L is the set of clusters produced by the human. Note that purity is a kind of precision metric which rewards a partition which has less noise. Inverse purity is a kind of recall metric. \u03b1 was set to 0.5 in the SemEval 2007 evaluation. Two simple baseline systems were defined in order to measure if the techniques used by participants were able to improve over them. The all-in-one baseline produces one single cluster -all documents belonging to that cluster. The one-in-one baseline produces n cluster with one different document in each cluster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Framework",
"sec_num": "2"
},
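As an illustration of how these metrics behave, the following Python sketch computes purity, inverse purity, and F-score for a system clustering against a gold clustering, both given as lists of document-id sets. The function names and the toy example are ours, not part of the SemEval scoring software.

```python
# Illustrative sketch of the WePS clustering metrics defined above.
# System and gold clusterings are lists of sets of document ids.

def precision(a, b):
    """Fraction of documents in cluster a that also appear in cluster b."""
    return len(a & b) / len(a) if a else 0.0

def purity(system, gold):
    n = sum(len(c) for c in system)
    return sum(len(c) / n * max(precision(c, l) for l in gold) for c in system)

def inverse_purity(system, gold):
    # Purity with the roles of system and gold clusters swapped.
    return purity(gold, system)

def f_score(system, gold, alpha=0.5):
    p, ip = purity(system, gold), inverse_purity(system, gold)
    return (p * ip) / (alpha * p + (1 - alpha) * ip) if p + ip else 0.0

# Toy example: two system clusters against two gold clusters.
system = [{"d1", "d2", "d3"}, {"d4"}]
gold = [{"d1", "d2"}, {"d3", "d4"}]
print(purity(system, gold), inverse_purity(system, gold), f_score(system, gold))
```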
{
"text": "Clustering is an important technique used in areas such as information retrieval, text mining, and data mining (Cutting et al., 1992) . Clustering algorithms combine data points into groups such that: (i) data points in the same group are similar to each other; and (ii) data points in one group are \"different\" from data points in a different group or cluster. In information retrieval it is assumed that documents that are similar to each other are likely to be relevant for the same query, and therefore having the document collection organised in clusters can provide improved document access (van Rijsbergen, 1979) . Different clustering techniques exist (Willett, 1988) the simplest one being the one-pass clustering algorithm (Rasmussen and Willett, 1987) . We have implemented an agglomerative clustering algorithm which is relatively simple, has reasonable complexity, and gave us rather good results. Our algorithm operates in an exclusive way, meaning that a document belongs to one and only one cluster -while this is our working hypothesis, it might not be valid in some cases.",
"cite_spans": [
{
"start": 111,
"end": 133,
"text": "(Cutting et al., 1992)",
"ref_id": "BIBREF4"
},
{
"start": 597,
"end": 619,
"text": "(van Rijsbergen, 1979)",
"ref_id": null
},
{
"start": 660,
"end": 675,
"text": "(Willett, 1988)",
"ref_id": "BIBREF17"
},
{
"start": 733,
"end": 762,
"text": "(Rasmussen and Willett, 1987)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative Clustering Algorithm",
"sec_num": "3"
},
{
"text": "The input to the algorithm is a set of document representations implemented as vectors of terms and weights. Initially, there are as many clusters as input documents; as the algorithm proceeds clusters are merged until a certain termination condition is reached. The algorithm computes the similarity between vector representations in order to decide whether or not to merge two clusters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative Clustering Algorithm",
"sec_num": "3"
},
{
"text": "The similarity metric we use is the cosine of the angle between two vectors. This metric gives value one for identical vectors and zero for vectors which are orthogonal (non related). Various options have been implemented in order to measure how close two clusters are, but for the experiments reported here we have used the following approach: the similarity between two clusters (sim C ) is equivalent to the \"document\" similarity (sim D ) between the two more similar documents in the two clusters -this is known as single linkage in the clustering literature; the following formula is used:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative Clustering Algorithm",
"sec_num": "3"
},
{
"text": "simC (C1,C2) = max d i \u2208C 1 ;d j \u2208C 2 simD(di,dj)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative Clustering Algorithm",
"sec_num": "3"
},
{
"text": "Where C k are clusters, d l are document representations (e.g., vectors), and sim D is the cosine metric given by the following formula:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative Clustering Algorithm",
"sec_num": "3"
},
{
"text": "cosine(d1, d2) = n i=1 w i,d 1 * w i,d 2 n i=1 (w i,d 1 ) 2 * n i=1 (w i,d 2 ) 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative Clustering Algorithm",
"sec_num": "3"
},
{
"text": "where w i,d is the weight of term i in document d and n is the numbers of terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative Clustering Algorithm",
"sec_num": "3"
},
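Read concretely, the cosine formula above can be computed over sparse term-to-weight dictionaries, as in this small sketch; the dictionary representation is an assumption, since the paper does not describe its data structures.

```python
import math

def cosine(d1, d2):
    """Cosine similarity between two sparse term->weight dictionaries."""
    # Only terms present in both vectors contribute to the dot product.
    dot = sum(w * d2[t] for t, w in d1.items() if t in d2)
    norm1 = math.sqrt(sum(w * w for w in d1.values()))
    norm2 = math.sqrt(sum(w * w for w in d2.values()))
    if norm1 == 0.0 or norm2 == 0.0:
        return 0.0
    return dot / (norm1 * norm2)
```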
{
"text": "If this similarity is greater than a threshold -experimentally obtained -the two clusters are merged together. At each iteration the most similar pair of clusters is merged. If this similarity is less than a certain threshold the algorithm stops. Merging two clusters consist of a simple step of set union, so there is no re-computation involved -such as computing a cluster centroid.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative Clustering Algorithm",
"sec_num": "3"
},
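Putting the pieces together, the merge loop might look like the following sketch, which reuses the cosine function above and assumes pre-built term-weight vectors; the brute-force pair search is for clarity only and is not claimed to match the actual implementation.

```python
def single_linkage_similarity(c1, c2, vectors):
    """Cluster similarity = similarity of the two closest documents (single linkage)."""
    return max(cosine(vectors[i], vectors[j]) for i in c1 for j in c2)

def agglomerative_clustering(vectors, threshold):
    # Start with one singleton cluster per document index.
    clusters = [{i} for i in range(len(vectors))]
    while len(clusters) > 1:
        # Find the most similar pair of clusters under single linkage.
        best_sim, best_pair = -1.0, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                sim = single_linkage_similarity(clusters[a], clusters[b], vectors)
                if sim > best_sim:
                    best_sim, best_pair = sim, (a, b)
        # Stop when even the closest pair falls below the threshold.
        if best_sim < threshold:
            break
        a, b = best_pair
        # Merging is a plain set union; no centroid recomputation.
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters
```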
{
"text": "We estimated the threshold for the clustering algorithm using the ECDL subset of the training data provided by SemEval. We applied the clustering algorithm where the threshold was set to zero. For each document set, purity, inverse purity, and Fscore were computed at each iteration of the algorithm, recording the similarity value of each newly created cluster. The similarity values for the best clustering results (best F-score) were recorded, and the maximum and minimum values discarded. The rest of the values were averaged to obtain an estimate of the optimal threshold. The thresholds used for the experiments reported here are as follows: 0.10 for word vectors and 0.12 for named entity vectors (see Section 5 for vector representations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agglomerative Clustering Algorithm",
"sec_num": "3"
},
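The threshold-estimation procedure can be read as the sketch below, which reuses f_score and single_linkage_similarity from the earlier sketches; the exact bookkeeping of the original system is not described in the paper, so this is only one plausible reading.

```python
def best_merge_similarity(vectors, gold, doc_ids):
    """Run threshold-zero clustering, recording the merge similarity at the best F-score."""
    clusters = [{i} for i in range(len(vectors))]
    best_f, best_sim = -1.0, 0.0
    while len(clusters) > 1:
        # Most similar pair under single linkage (as in the clustering sketch above).
        sims = [(single_linkage_similarity(clusters[a], clusters[b], vectors), a, b)
                for a in range(len(clusters)) for b in range(a + 1, len(clusters))]
        sim, a, b = max(sims)
        clusters[a] |= clusters[b]
        del clusters[b]
        system = [{doc_ids[i] for i in c} for c in clusters]
        f = f_score(system, gold)
        if f > best_f:
            best_f, best_sim = f, sim
    return best_sim

def estimate_threshold(training_sets):
    """training_sets: (vectors, gold_clusters, doc_ids) triples, one per person name."""
    values = sorted(best_merge_similarity(v, g, ids) for v, g, ids in training_sets)
    # Discard the maximum and minimum values and average the rest.
    trimmed = values[1:-1]
    return sum(trimmed) / len(trimmed)
```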
{
"text": "We rely on available extraction and summarization technology in order to linguistically process the documents for creating document representations for clustering. Although the SemEval corpus contains information other than the retrieved pages themselves, we have made no attempt to analyse or use contextual information given with the input document. Two tools are used: the GATE system (Cunningham et al., 2002) and a summarization toolkit (Saggion, 2002; Saggion and Gaizauskas, 2004) which is compatible with GATE. The input for analysis is a set of documents and a person name (first name and last name). The documents are analysed by the default GATE 1 ANNIE system which creates different types of named entity annotations. No adaptation of the system was carried out because we wanted to verify how far we could go using available tools. Summarization technology was used from single document summarization modules from our summarization toolkit.",
"cite_spans": [
{
"start": 388,
"end": 413,
"text": "(Cunningham et al., 2002)",
"ref_id": "BIBREF3"
},
{
"start": 442,
"end": 457,
"text": "(Saggion, 2002;",
"ref_id": "BIBREF14"
},
{
"start": 458,
"end": 487,
"text": "Saggion and Gaizauskas, 2004)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Processing Technology",
"sec_num": "4"
},
{
"text": "The core of the toolkit is a set of summarization modules which compute numeric features for each sentence in the input document, the value of the feature indicates how relevant the information in the sentence is for the feature. The computed values, which are normalised yielding numbers in the interval [0..1] -are combined in a linear formula to obtain a score for each sentence which is used as the basis for sentence selection. Sentences are ranked based on their score and top ranked sentences selected to produce an extract. Many features implemented in this tool have been suggested in past research as valuable for the task of identifying sentences for creating summaries. In this work, summaries are created following two different approaches as described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Processing Technology",
"sec_num": "4"
},
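A minimal sketch of this scoring scheme follows; it assumes each sentence arrives with a dictionary of already normalised feature values, and the feature names and weights in the example are placeholders rather than those of the toolkit.

```python
def score_sentence(features, weights):
    """Linear combination of normalised feature values in [0, 1]."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

def extract_summary(sentences, weights, top_n=5):
    """sentences: list of (sentence_text, feature_dict) pairs; returns the top-scoring sentences."""
    ranked = sorted(sentences, key=lambda s: score_sentence(s[1], weights), reverse=True)
    return [text for text, _ in ranked[:top_n]]

# Example with two placeholder features.
sentences = [("John Smith is a surgeon.", {"contains_target": 1.0, "position": 0.9}),
             ("The weather was fine.", {"contains_target": 0.0, "position": 0.5})]
print(extract_summary(sentences, {"contains_target": 0.7, "position": 0.3}, top_n=1))
```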
{
"text": "The text and linguistic processors used in our system are: document tokenisation to identify different kinds of words; sentence splitting to segment the text into units used by the summariser; parts-of-speech tagging used for named entity recognition; named entity recognition using a gazetteer lookup module and regular expressions grammars; and named entity coreference module using a rule-based orthographic name matcher to identify name mentions considered equivalent (e.g., \"John Smith\" and \"Mr. Smith\"). Named entities of type Person, Organization, Address, Date, and Location are considered relevant document terms and stored in a special named entity called Mention as an annotation. The performance of the named entity recogniser on Web data (business news from the Web) is around 0.90 F-score (Maynard et al., 2003) .",
"cite_spans": [
{
"start": 803,
"end": 825,
"text": "(Maynard et al., 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Processing Technology",
"sec_num": "4"
},
{
"text": "Coreference chains are created and analysed and if they contain an entity matching the target person's surname, all elements of the chain are marked as a feature of the annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Processing Technology",
"sec_num": "4"
},
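This chain-marking step could be approximated as follows; chains are assumed to be plain lists of mention strings, and returning the matched mentions stands in for the GATE annotation feature actually used.

```python
def mentions_of_target(coreference_chains, target_surname):
    """Collect the mentions of every chain that contains the target surname."""
    marked = []
    for chain in coreference_chains:
        if any(target_surname.lower() in mention.lower() for mention in chain):
            marked.extend(chain)
    return marked

# Example: "John Smith" and "Mr. Smith" were linked by the orthographic matcher.
chains = [["John Smith", "Mr. Smith", "Smith"], ["Mary Jones"]]
print(mentions_of_target(chains, "Smith"))
```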
{
"text": "We have tested two summarization conditions in this work: In one set of experiments a sentence belongs to a summary if it contains a mention which is coreferent with the target entity. In a second set of experiments a sentence belongs to a summary if it contains a \"biographical pattern\". We rely on a number of patterns that have been proposed in the past to identify descriptive phrases in text collections (Joho and Sanderson, 2000) . The patterns used in the experiments described here are shown in Table 1. In the patterns, dp is a descriptive phrase that in (Joho and Sanderson, 2000) is taken as a noun phrase. These patterns are likely to capture information which is relevant to create person profiles, as used in DUC 2004 and in TREC QA -to answer definitional questions.",
"cite_spans": [
{
"start": 409,
"end": 435,
"text": "(Joho and Sanderson, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 564,
"end": 590,
"text": "(Joho and Sanderson, 2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Processing Technology",
"sec_num": "4"
},
{
"text": "These patterns are implemented as regular expressions using the JAPE language (Cunningham et al., 2002) . Our implementation of the patterns make use of coreference information so that target is any name in text which is coreferent with sought person. In order to implement the dp element in the patterns we use the information provided by a noun phrase chunker. The following is one of the JAPE rules for identifying key phrases as implemented in our system:",
"cite_spans": [
{
"start": 78,
"end": 103,
"text": "(Cunningham et al., 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Processing Technology",
"sec_num": "4"
},
{
"text": "({TargetPerson} ({ Token.string == \"is\" } | {Token.string == \"was\" }) {NounChunk}):annotate --> :annotate.KeyPhrase = {}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Processing Technology",
"sec_num": "4"
},
{
"text": "where TargetPerson is the sought entity, and NounChunk is a noun chunk. The rule states that when the pattern is found, a KeyPhrase should be created.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Processing Technology",
"sec_num": "4"
},
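Outside GATE, the "target (is | was) dp" pattern can be roughly approximated with a regular expression, as in the sketch below; the noun chunk is crudely simulated by a short run of words, so this only illustrates the idea rather than reproducing the JAPE implementation.

```python
import re

def find_key_phrases(text, target_name):
    """Crude approximation of the 'target (is|was) dp' biographical pattern."""
    # The dp element is approximated as an article followed by up to six words.
    pattern = re.compile(
        re.escape(target_name) + r"\s+(?:is|was)\s+((?:a|an|the)\s+(?:\w+\s*){1,6})",
        re.IGNORECASE,
    )
    return [m.group(1).strip() for m in pattern.finditer(text)]

print(find_key_phrases("James Davidson is a sports medicine orthopedic surgeon.",
                       "James Davidson"))
```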
{
"text": "Some examples of these patterns in text are shown in Table 4 . A profile-based summarization system which uses these patterns to create person profiles is reported in (Saggion and Gaizauskas, 2005) .",
"cite_spans": [
{
"start": 167,
"end": 197,
"text": "(Saggion and Gaizauskas, 2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Natural Language Processing Technology",
"sec_num": "4"
},
{
"text": "Patterns target (is | was |...) (a | an | the) dp target, (who | whose | ...) target, (a | the | one ...) dp target, dp target's target and others ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Processing Technology",
"sec_num": "4"
},
{
"text": "Using language resources creation modules from the summarization tool, two frequency tables are created for each document set (or person) on-the-fly: (i) an inverted document frequency table for words (no normalisation is applied); and (ii) an inverted frequency table for Mentions (the full entity string is used, no normalisation is applied).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency Information",
"sec_num": "4.1"
},
{
"text": "Statistics (term frequencies (tf(Term)) and inverted document frequencies (idf(Term))) are computed over tokens and Mentions using tools from the summarization toolkit (see examples in Table 3 Using these tables vector representations are created for each document (same as in (Bagga and Baldwin, 1998) ). We use the following formula to compute term weight (N is the number of documents in the input set):",
"cite_spans": [
{
"start": 277,
"end": 302,
"text": "(Bagga and Baldwin, 1998)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 185,
"end": 192,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Frequency Information",
"sec_num": "4.1"
},
{
"text": "weight(Term) = tf(Term) * log2( N idf(Term) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency Information",
"sec_num": "4.1"
},
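A sketch of this weighting scheme: document frequencies are collected over the input set and each document becomes a sparse term-to-weight vector. The naive tokenisation and the toy documents are assumptions; in the actual system the frequency tables are built by the summarization toolkit, and idf(Term) in the formula appears to act as a document frequency.

```python
import math
from collections import Counter

def build_vectors(documents):
    """documents: list of term lists (words or Mentions). Returns term->weight dicts."""
    n_docs = len(documents)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in documents for term in set(doc))
    vectors = []
    for doc in documents:
        tf = Counter(doc)
        vectors.append({term: count * math.log2(n_docs / df[term])
                        for term, count in tf.items()})
    return vectors

# Toy example with three tiny "documents".
docs = [["Jerry", "Hobbs", "Texas"], ["Jerry", "Hobbs", "linguistics"], ["Texas", "police"]]
print(build_vectors(docs)[0])
```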
{
"text": "These vectors are also stored in the GATE documents. Two types of representations were considered for these experiments: (i) full document or summary (terms in the summary are considered for vector creation); and (ii) words are used as terms or Mentions are used as terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency Information",
"sec_num": "4.1"
},
{
"text": "In this section we present results of six different configurations of the clustering algorithm. The configurations are composed of two parts one which indicates where the terms are extracted from and the second part indicates what type of terms were used. The text conditions are as follows: Full Document (FD) condition means that the whole document was used for extracting terms for vector creation; Person Summary (PS) means that sentences containing the target person name were used to extract terms for vector creation; Descriptive Phrase (DP) means that sentences containing a descriptive patterns were used to extract terms for vector creation. The term conditions are: Words (W) words were used as terms and Mentions (M) named entities were used as terms. Local inverted term frequencies were used to weight the terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-document Coreference Systems",
"sec_num": "5"
},
{
"text": "The best system in SemEval 2007 obtained an Fscore of 0.78, the average F-score of all 16 participant systems is 0.60. Baseline one-in-one has an F-score of 0.61 and baseline all-in-one an F-score of 0.40. Results for our system configurations are presented in Table 4 . Our best configuration (FD+W) obtains an F-score of 0.74 (or a fourth position in the SemEval ranking). All our configurations obtained F-scores greater than the average of 0.60 of all participant systems. They also perform better than the two baselines. Our optimal configurations (FD+W and PS+W) both perform similarly with respect to F-score.",
"cite_spans": [],
"ref_spans": [
{
"start": 261,
"end": 268,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "SemEval 2007 Web People Search Results",
"sec_num": "6"
},
{
"text": "While the full document condition favours \"inverse purity\", summary condition favours \"purity\". As one may expect, the use of descriptive phrases to create summaries has the effect of increasing purity to one extreme, these expressions are far too restrictive to capture all necessary information for disambiguation. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SemEval 2007 Web People Search Results",
"sec_num": "6"
},
{
"text": "While these results are rather encouraging, they were not optimal. In particular, we were surprised that semantic information performed worst than a simple word-based approach. We decided to investigate whether some types of semantic information might be more helpful than others in the clustering process. We therefore created one vector for each type of information: Organization, Person, Location, Date, Address in each document and reclustered all test data using one type at a time, without modifying any of the system parameters (e.g., without re-training). The results were very encouraging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic-based Experiments",
"sec_num": "7"
},
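The per-type experiment can be pictured as in the sketch below: Mention terms are assumed to be (type, string) pairs, vectors for one semantic type are built by filtering on that type, and build_vectors and agglomerative_clustering are the functions sketched earlier; the clustering threshold is left unchanged, as in the experiment.

```python
SEMANTIC_TYPES = ["Organization", "Person", "Location", "Date", "Address"]

def cluster_by_semantic_type(documents_mentions, threshold):
    """documents_mentions: one list of (entity_type, entity_string) pairs per document."""
    results = {}
    for entity_type in SEMANTIC_TYPES:
        # Keep only the mentions of a single semantic type in each document.
        filtered = [[string for t, string in doc if t == entity_type]
                    for doc in documents_mentions]
        vectors = build_vectors(filtered)                 # weighting sketch above
        results[entity_type] = agglomerative_clustering(vectors, threshold)
    return results
```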
{
"text": "Results of semantic-based clustering per information type are presented in Tables 5 and 6 Table 6 : Results for summary condition and different semantic information types. Improvements over PS+M are reported.",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 89,
"text": "Tables 5 and 6",
"ref_id": "TABREF7"
},
{
"start": 90,
"end": 97,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7.1"
},
{
"text": "in the tables reports results for clustering using one type of information alone. Table 5 reports results for semantic information with full text condition and it is therefore compared to our configuration FD+M which also uses full text condition together with semantic information. The last column in the table shows improvements over that configuration. Using Organization type of information in full text condition, not only outperforms the previous system by ten points, also exceeds by a fraction of a point the best system in SemEval 2007 (one point if we consider macro averaged F-score). Statistical tests (ttest) show that improvement over FD+M is statistically significant. Other semantic types of information also have improved performance, not all of them however. Location and Date in the full documents are probably too ambiguous to help disambiguating the target named entity. Table 6 reports results for semantic information with summary text condition (only personal summaries were tried, experiments using descriptive phrases are underway) and it is therefore compared to our configuration PS+M which also uses summary condition together with semantic information. The last column in the table shows improvements over that configuration. Here all semantic types of information taken individually outperform a system which uses the combination of all types. This is probably because all types of information in a personal summary are somehow related to the target person.",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 89,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 892,
"end": 899,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7.1"
},
{
"text": "Following (Popescu and Magnini, 2007) , we present purity, inverse purity, and F-score results for all our configurations per category (ACL, US Census, Wikipedia) in the test set.",
"cite_spans": [
{
"start": 10,
"end": 37,
"text": "(Popescu and Magnini, 2007)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results per Person Set",
"sec_num": "7.2"
},
{
"text": "In Tables 7, 8, While the Organization type of entity worked better overall, it is not optimal across different categories of people. Note for example that very good results are obtained for the Wikipedia and US Census sets, but rather poor results for the ACL set, where a technique which relies on using full documents and words for document representations works better. These results show that more work is needed before reaching any conclusions on the best document representation for our algorithm in this task.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 15,
"text": "Tables 7, 8,",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Results per Person Set",
"sec_num": "7.2"
},
{
"text": "The problem of cross-document coreference has been studied for a number of years now. Bagga and Baldwin (Bagga and Baldwin, 1998) used the vector space model together with summarization techniques to tackle the cross-document coreference problem. Their approach uses vector representations following a bag-of-words approach. Terms for vector representation are obtained from sentences where the target person appears. They have not presented an analysis of the impact of full document versus summary condition and their clustering algorithm is rather under-specified. Here we have presented a clearer picture of the influence of summary vs full document condition in the clustering process. Mann and Yarowsky (Mann and Yarowsky, 2003) used semantic information extracted from documents referring to the target person in an hierarchical agglomerative clustering algorithm. Semantic information here refers to factual information about a person such as the date of birth, professional career or education. Information is extracted using patterns some of them manually developed and others induced from examples. We differ from this approach in that our semantic information is more general and is not particularly related -although it might be -to the target person.",
"cite_spans": [
{
"start": 104,
"end": 129,
"text": "(Bagga and Baldwin, 1998)",
"ref_id": "BIBREF1"
},
{
"start": 709,
"end": 734,
"text": "(Mann and Yarowsky, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "Phan el al. (Phan et al., 2006) follow Mann and Yarowsky in their use of a kind of biographical information about a person. They use a machine learning algorithm to classify sentences according to particular information types in order to automatically construct a person profile. Instead of comparing biographical information in the person profile altogether as in (Mann and Yarowsky, 2003) , they compare each type of information independently of each other, combining them only to make the final decision. Finally, the best SemEval 2007 Web People Search system (Chen and Martin, 2007) used techniques similar to ours: named entity recognition using off-the-shelf systems. However in addition to semantic information and full document condition they also explore the use of contextual information such as the url where the document comes from. They show that this information is of little help. Our improved system obtained a slightly higher macroaveraged f-score over their system.",
"cite_spans": [
{
"start": 12,
"end": 31,
"text": "(Phan et al., 2006)",
"ref_id": "BIBREF9"
},
{
"start": 39,
"end": 47,
"text": "Mann and",
"ref_id": null
},
{
"start": 365,
"end": 390,
"text": "(Mann and Yarowsky, 2003)",
"ref_id": "BIBREF7"
},
{
"start": 564,
"end": 587,
"text": "(Chen and Martin, 2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "We have presented experiments on cross-document coreference of person names in the context of the first SemEval 2007 Web People Search task. We have designed and implemented a solution which uses an in-house clustering algorithm and available extraction and summarization techniques to produce representations needed by the clustering algorithm. We have presented different approaches and compared them with SemEval evaluation's results. We have also shown that one system which uses one specific type of semantic information achieves stateof-the-art performance. However, more work is needed, in order to understand variation in performance from one data set to another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "9"
},
{
"text": "Many avenues of improvement are expected. Where extraction technology is concerned, we have used an off-the-shelf system which is probably not the most appropriate for the type of data we are dealing with, and so adaptation is needed here. With respect to the clustering algorithm we plan to carry out further experiments to test the effect of different similarity metrics, different merging criteria including creation of cluster centroids, and cluster distances; with respect to the summarization techniques we intend to investigate how the extraction of sentences containing pronouns referring to the target entity affects performance, our current version only exploits name coreference. Our future work will also explore how (and if) the use of contextual information available on the web can lead to better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "9"
},
{
"text": "http://gate.ac.uk",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are indebted to the three anonymous reviewers for their extensive suggestions that helped improve this work. This work was partially supported by the EU-funded MUSING project (IST-2004-027097).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The SemEval-2007 WePS Evaluation: Establishing a benchmark for Web People Search Task",
"authors": [
{
"first": "J",
"middle": [],
"last": "Artiles",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Gonzalo",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of Semeval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Artiles, J. Gonzalo, and S. Sekine. 2007. The SemEval-2007 WePS Evaluation: Establishing a benchmark for Web People Search Task. In Proceed- ings of Semeval 2007, Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Entity-Based Cross-Document Coreferencing Using the Vector Space Model",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics (COLING-ACL'98)",
"volume": "",
"issue": "",
"pages": "79--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Bagga and B. Baldwin. 1998. Entity-Based Cross- Document Coreferencing Using the Vector Space Model. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics (COLING-ACL'98), pages 79-85.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Cu-comsem: Exploring rich features for unsupervised web personal named disambiguation",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "J",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of SemEval 2007, Assocciation for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "125--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Chen and J.H. Martin. 2007. Cu-comsem: Explor- ing rich features for unsupervised web personal named disambiguation. In Proceedings of SemEval 2007, As- socciation for Computational Linguistics, pages 125- 128.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "GATE: A Framework and Graphical Development Environment for Robust NLP Tools and Applications",
"authors": [
{
"first": "H",
"middle": [],
"last": "Cunningham",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Maynard",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Tablan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Anniversary Meeting of the Association for Computational Linguistics (ACL'02)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Cunningham, D. Maynard, K. Bontcheva, and V. Tablan. 2002. GATE: A Framework and Graphical Development Environment for Robust NLP Tools and Applications. In Proceedings of the 40th Anniversary Meeting of the Association for Computational Linguis- tics (ACL'02).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Scatter/gather: A clusterbased approach to browsing large document collections",
"authors": [
{
"first": "Douglass",
"middle": [
"R"
],
"last": "Cutting",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"O"
],
"last": "Pedersen",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Karger",
"suffix": ""
},
{
"first": "John",
"middle": [
"W"
],
"last": "Tukey",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Fifteenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "318--329",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglass R. Cutting, Jan O. Pedersen, David Karger, and John W. Tukey. 1992. Scatter/gather: A cluster- based approach to browsing large document collec- tions. In Proceedings of the Fifteenth Annual Interna- tional ACM SIGIR Conference on Research and De- velopment in Information Retrieval, pages 318-329.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "WordNet improves text document clustering",
"authors": [
{
"first": "A",
"middle": [],
"last": "Hotho",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Staab",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Stumme",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the SIGIR 2003 Semantic Web Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Hotho, S. Staab, and G. Stumme. 2003. WordNet im- proves text document clustering. In Proc. of the SIGIR 2003 Semantic Web Workshop.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Retrieving Descriptive Phrases from Large Amounts of Free Text",
"authors": [
{
"first": "H",
"middle": [],
"last": "Joho",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sanderson",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of Conference on Information and Knoweldge Management (CIKM)",
"volume": "",
"issue": "",
"pages": "180--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Joho and M. Sanderson. 2000. Retrieving Descrip- tive Phrases from Large Amounts of Free Text. In Proceedings of Conference on Information and Know- eldge Management (CIKM), pages 180-186. ACM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unsupervised personal name disambiguation",
"authors": [
{
"first": "G",
"middle": [
"S"
],
"last": "Mann",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 7 th Conference on Natural Language Learning (CoNLL-2003)",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. S. Mann and D. Yarowsky. 2003. Unsupervised per- sonal name disambiguation. In W. Daelemans and M. Osborne, editors, Proceedings of the 7 th Confer- ence on Natural Language Learning (CoNLL-2003), pages 33-40. Edmonton, Canada, May.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Towards a semantic extraction of named entities",
"authors": [
{
"first": "D",
"middle": [],
"last": "Maynard",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Cunningham",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Recent Advances in Natural Language Processing (RANLP'03)",
"volume": "",
"issue": "",
"pages": "255--261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Maynard, K. Bontcheva, and H. Cunningham. 2003. Towards a semantic extraction of named entities. In G. Angelova, K. Bontcheva, R. Mitkov, N. Nicolov, and N. Nikolov, editors, Proceedings of Recent Advances in Natural Language Processing (RANLP'03), pages 255-261, Borovets, Bulgaria, Sep. http://gate.ac.uk/sale/ranlp03/ranlp03.pdf.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Personal name resolution crossover documents by a semantics-based approach",
"authors": [
{
"first": "X.-H",
"middle": [],
"last": "Phan",
"suffix": ""
},
{
"first": "L.-M",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Horiguchi",
"suffix": ""
}
],
"year": 2006,
"venue": "IEICE Trans. Inf. & Syst",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X.-H. Phan, L.-M. Nguyen, and S. Horiguchi. 2006. Personal name resolution crossover documents by a semantics-based approach. IEICE Trans. Inf. & Syst., Feb 2006.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Irst-bp: Web people search using name entities",
"authors": [
{
"first": "Octavian",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "",
"issue": "",
"pages": "195--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Octavian Popescu and Bernardo Magnini. 2007. Irst-bp: Web people search using name entities. In Proceed- ings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 195-198, Prague, Czech Republic, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Non-hierarchical document clustering using the icl distribution array processor",
"authors": [
{
"first": "E",
"middle": [],
"last": "Rasmussen",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Willett",
"suffix": ""
}
],
"year": 1987,
"venue": "SIGIR '87: Proceedings of the 10th annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "132--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Rasmussen and P. Willett. 1987. Non-hierarchical document clustering using the icl distribution ar- ray processor. In SIGIR '87: Proceedings of the 10th annual international ACM SIGIR conference on Research and development in information retrieval, pages 132-139, New York, NY, USA. ACM Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multi-document summarization by cluster/profile relevance and redundancy removal",
"authors": [
{
"first": "H",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Document Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Saggion and R. Gaizauskas. 2004. Multi-document summarization by cluster/profile relevance and redun- dancy removal. In Proceedings of the Document Un- derstanding Conference 2004. NIST.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Experiments on statistical and pattern-based biographical summarization",
"authors": [
{
"first": "H",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of EPIA 2005",
"volume": "",
"issue": "",
"pages": "611--621",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Saggion and R. Gaizauskas. 2005. Experiments on statistical and pattern-based biographical summariza- tion. In Proceedings of EPIA 2005, pages 611-621.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Shallow-based Robust Summarization",
"authors": [
{
"first": "H",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2002,
"venue": "Automatic Summarization: Solutions and Perspectives",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Saggion. 2002. Shallow-based Robust Summariza- tion. In Automatic Summarization: Solutions and Per- spectives, ATALA, December, 14.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Shef: Semantic tagging and summarization techniques applied to cross-document coreference",
"authors": [
{
"first": "H",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of SemEval 2007, Assocciation for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "292--295",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Saggion. 2007. Shef: Semantic tagging and summa- rization techniques applied to cross-document corefer- ence. In Proceedings of SemEval 2007, Assocciation for Computational Linguistics, pages 292-295.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Recent trends in hierarchic document clustering: A critical review",
"authors": [
{
"first": "P",
"middle": [],
"last": "Willett",
"suffix": ""
}
],
"year": 1988,
"venue": "Information Processing & Management",
"volume": "24",
"issue": "5",
"pages": "577--597",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Willett. 1988. Recent trends in hierarchic document clustering: A critical review. Information Processing & Management, 24(5):577-597.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>Dickson's James Hamilton, 1st earl of Arran</td></tr><tr><td>James Davidson, MD, Sports Medicine Orthope-</td></tr><tr><td>dic Surgeon, Phoenix Arizona</td></tr><tr><td>As adjutant general, Davidson was chief of the</td></tr><tr><td>State Police, qv which he organized quickly.</td></tr></table>",
"num": null,
"text": "Set of patterns for identifying profile information. invention, the Kinetoscope, was simple: a strip of several images was passed in front of an illuminated lens and behind a spinning wheel.",
"type_str": "table",
"html": null
},
"TABREF1": {
"content": "<table/>",
"num": null,
"text": "Descriptive phrases in test documents for different target names.",
"type_str": "table",
"html": null
},
"TABREF2": {
"content": "<table><tr><td colspan=\"2\">word frequencies Mention frequencies</td></tr><tr><td>of (92)</td><td>Jerry Hobbs (80)</td></tr><tr><td>Hobbs (92)</td><td>Hobbs (56)</td></tr><tr><td>Jerry (90)</td><td>Krystal Tobias (38)</td></tr><tr><td>to (89)</td><td>Texas (37)</td></tr><tr><td>in (87)</td><td>Jerry (36)</td></tr><tr><td>and (86)</td><td>Laura Hobbs (35)</td></tr><tr><td>the (85)</td><td>Monday (34)</td></tr><tr><td>a (85)</td><td>1990 (31)</td></tr></table>",
"num": null,
"text": ").",
"type_str": "table",
"html": null
},
"TABREF3": {
"content": "<table/>",
"num": null,
"text": "Examples of top frequent terms (words and named entities) and their frequencies in the Jerry Hobbs set.",
"type_str": "table",
"html": null
},
"TABREF5": {
"content": "<table/>",
"num": null,
"text": "Results for different clustering configurations. These results are those obtained on the whole set of 30 person names.",
"type_str": "table",
"html": null
},
"TABREF7": {
"content": "<table><tr><td colspan=\"4\">Semantic Type Purity Inv.Purity F-Score</td><td>+/-</td></tr><tr><td>Person</td><td>0.85</td><td>0.64</td><td colspan=\"2\">0.70 +0.06</td></tr><tr><td>Organization</td><td>0.97</td><td>0.57</td><td colspan=\"2\">0.69 +0.05</td></tr><tr><td>Date</td><td>0.87</td><td>0.60</td><td colspan=\"2\">0.68 +0.04</td></tr><tr><td>Location</td><td>0.82</td><td>0.63</td><td colspan=\"2\">0.67 +0.03</td></tr><tr><td>Address</td><td>0.93</td><td>0.54</td><td colspan=\"2\">0.65 +0.01</td></tr></table>",
"num": null,
"text": "Results for full document condition and different semantic information types. Improvements over FD+M are reported.",
"type_str": "table",
"html": null
},
"TABREF8": {
"content": "<table><tr><td colspan=\"2\">Configuration Set</td><td colspan=\"3\">Purity I.Purity F-Score</td></tr><tr><td>FD+Address</td><td>ACL</td><td>0.86</td><td>0.48</td><td>0.57</td></tr><tr><td>FD+Address</td><td>US C.</td><td>0.81</td><td>0.71</td><td>0.75</td></tr><tr><td>FD+Address</td><td>Wikip.</td><td>0.78</td><td>0.70</td><td>0.73</td></tr><tr><td>PS+Address</td><td>ACL</td><td>0.96</td><td>0.38</td><td>0.50</td></tr><tr><td>PS+Address</td><td>US C.</td><td>0.94</td><td>0.61</td><td>0.72</td></tr><tr><td>PS+Address</td><td>Wikip.</td><td>0.88</td><td>0.62</td><td>0.71</td></tr><tr><td>FD+Date</td><td>ACL</td><td>0.63</td><td>0.82</td><td>0.69</td></tr><tr><td>FD+Date</td><td>US C.</td><td>0.52</td><td>0.87</td><td>0.64</td></tr><tr><td>FD+Date</td><td>Wikip.</td><td>0.59</td><td>0.85</td><td>0.68</td></tr><tr><td>PS+Date</td><td>ACL</td><td>0.88</td><td>0.49</td><td>0.59</td></tr><tr><td>PS+Date</td><td>US C.</td><td>0.88</td><td>0.64</td><td>0.72</td></tr><tr><td>PS+Date</td><td>Wikip.</td><td>0.84</td><td>0.67</td><td>0.72</td></tr><tr><td>FD+Location</td><td>ACL</td><td>0.63</td><td>0.78</td><td>0.65</td></tr><tr><td>FD+Location</td><td>US C.</td><td>0.52</td><td>0.86</td><td>0.64</td></tr><tr><td>FD+Location</td><td>Wikip.</td><td>0.49</td><td>0.91</td><td>0.62</td></tr><tr><td>PS+Location</td><td>ACL</td><td>0.87</td><td>0.47</td><td>0.54</td></tr><tr><td>PS+Location</td><td>US C.</td><td>0.85</td><td>0.66</td><td>0.73</td></tr><tr><td>PS+Location</td><td>Wikip.</td><td>0.74</td><td>0.75</td><td>0.72</td></tr></table>",
"num": null,
"text": "and 9, results are reported for full",
"type_str": "table",
"html": null
},
"TABREF9": {
"content": "<table><tr><td colspan=\"5\">: Results for clustering configurations per</td></tr><tr><td colspan=\"5\">person type set (ACL, US Census, and Wikipedia)</td></tr><tr><td>-Part I.</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Configuration Set</td><td colspan=\"3\">Purity I.Purity F-Score</td></tr><tr><td>FD+Org.</td><td>ACL</td><td>0.92</td><td>0.57</td><td>0.69</td></tr><tr><td>FD+Org.</td><td>US C.</td><td>0.87</td><td>0.78</td><td>0.82</td></tr><tr><td>FD+Org.</td><td>Wikip.</td><td>0.88</td><td>0.79</td><td>0.83</td></tr><tr><td>PS+Org.</td><td>ACL</td><td>0.98</td><td>0.42</td><td>0.54</td></tr><tr><td>PS+Org.</td><td>US C.</td><td>0.95</td><td>0.63</td><td>0.74</td></tr><tr><td>PS+Org.</td><td>Wikip.</td><td>0.96</td><td>0.65</td><td>0.77</td></tr><tr><td>FD+Person</td><td>ACL</td><td>0.82</td><td>0.66</td><td>0.72</td></tr><tr><td>FD+Person</td><td>US C.</td><td>0.81</td><td>0.74</td><td>0.76</td></tr><tr><td>FD+Person</td><td>Wikip.</td><td>0.77</td><td>0.75</td><td>0.75</td></tr><tr><td>PS+Person</td><td>ACL</td><td>0.86</td><td>0.53</td><td>0.63</td></tr><tr><td>PS+Person</td><td>US C.</td><td>0.85</td><td>0.6721</td><td>0.73</td></tr><tr><td>PS+Person</td><td>Wikip.</td><td>0.82</td><td>0.70</td><td>0.73</td></tr></table>",
"num": null,
"text": "",
"type_str": "table",
"html": null
},
"TABREF10": {
"content": "<table><tr><td>: Results for clustering configurations per</td></tr><tr><td>person type set (ACL, US Census, and Wikipedia)</td></tr><tr><td>-Part II.</td></tr><tr><td>document condition(FD), summary condition (PS),</td></tr><tr><td>word-based representation (W), mention representa-</td></tr><tr><td>tion (M) -i.e. all types of named entities, and five</td></tr><tr><td>different mention types: Person, Location, Organi-</td></tr><tr><td>zation, Date, and Address.</td></tr></table>",
"num": null,
"text": "",
"type_str": "table",
"html": null
},
"TABREF12": {
"content": "<table><tr><td>: Results for clustering configurations per</td></tr><tr><td>person type set (ACL, US Census, and Wikipedia)</td></tr><tr><td>-Part III.</td></tr></table>",
"num": null,
"text": "",
"type_str": "table",
"html": null
}
}
}
}