{
"paper_id": "D07-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:18:45.992829Z"
},
"title": "Topic Segmentation with Hybrid Document Indexing",
"authors": [
{
"first": "Irina",
"middle": [],
"last": "Matveeva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Chicago Chicago",
"location": {
"postCode": "60637",
"region": "IL"
}
},
"email": "matveeva@cs.uchicago.edu"
},
{
"first": "Gina-Anne",
"middle": [],
"last": "Levow",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Chicago Chicago",
"location": {
"postCode": "60637",
"region": "IL"
}
},
"email": "levow@cs.uchicago.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a domain-independent unsupervised topic segmentation approach based on hybrid document indexing. Lexical chains have been successfully employed to evaluate lexical cohesion of text segments and to predict topic boundaries. Our approach is based in the notion of semantic cohesion. It uses spectral embedding to estimate semantic association between content nouns over a span of multiple text segments. Our method significantly outperforms the baseline on the topic segmentation task and achieves performance comparable to state-of-the-art methods that incorporate domain specific information.",
"pdf_parse": {
"paper_id": "D07-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a domain-independent unsupervised topic segmentation approach based on hybrid document indexing. Lexical chains have been successfully employed to evaluate lexical cohesion of text segments and to predict topic boundaries. Our approach is based in the notion of semantic cohesion. It uses spectral embedding to estimate semantic association between content nouns over a span of multiple text segments. Our method significantly outperforms the baseline on the topic segmentation task and achieves performance comparable to state-of-the-art methods that incorporate domain specific information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The goal of topic segmentation is to discover story boundaries in the stream of text or audio recordings. Story is broadly defined as segment of text containing topically related sentences. In particular, the task may require segmenting a stream of broadcast news, addressed by the Topic Detection and Tracking (TDT) evaluation project (Wayne, 2000; Allan, 2002) . In this case topically related sentences belong to the same news story. While we are considering TDT data sets in this paper, we would like to pose the problem more broadly and consider a domainindependent approach to topic segmentation.",
"cite_spans": [
{
"start": 336,
"end": 349,
"text": "(Wayne, 2000;",
"ref_id": "BIBREF19"
},
{
"start": 350,
"end": 362,
"text": "Allan, 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous research on topic segmentation has shown that lexical coherence is a reliable indicator of topical relatedness. Therefore, many approaches have concentrated on different ways of estimating lexical coherence of text segments, such as semantic similarity between words (Kozima, 1993) , similarity between blocks of text (Hearst, 1994) , and adaptive language models (Beeferman et al., 1999) . These approaches use word repetitions to evaluate coherence. Since the sentences covering the same story represent a coherent discourse segment, they typically contain the same or related words. Repeated words build lexical chains that are consequently used to estimate lexical coherence. This can be done either by analyzing the number of overlapping lexical chains (Hearst, 1994) or by building a short-range and long-range language model (Beeferman et al., 1999) . More recently, topic segmentation with lexical chains has been successfully applied to segmentation of news stories, multi-party conversation and audio recordings (Galley et al., 2003) .",
"cite_spans": [
{
"start": 276,
"end": 290,
"text": "(Kozima, 1993)",
"ref_id": "BIBREF11"
},
{
"start": 327,
"end": 341,
"text": "(Hearst, 1994)",
"ref_id": "BIBREF10"
},
{
"start": 373,
"end": 397,
"text": "(Beeferman et al., 1999)",
"ref_id": "BIBREF1"
},
{
"start": 767,
"end": 781,
"text": "(Hearst, 1994)",
"ref_id": "BIBREF10"
},
{
"start": 841,
"end": 865,
"text": "(Beeferman et al., 1999)",
"ref_id": "BIBREF1"
},
{
"start": 1031,
"end": 1052,
"text": "(Galley et al., 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "When the task is to segment long streams of text containing stories which may continue at a later point in time, for example developing news stories, building of lexical chains becomes intricate. In addition, the word repetitions do not account for synonymy and semantic relatedness between words and therefore may not be able to discover coherence of segments with little word overlap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach aims at discovering semantic relatedness beyond word repetition. It is based on the notion of semantic cohesion rather than lexical cohesion. We propose to use a similarity metric between segments of text that takes into account semantic associations between words spanning a number of segments. This method approximates lexical chains by averaging the similarity to a number of previous text segments and accounts for synonymy by using a hybrid document indexing scheme. Our text segmentation experiments show a significant performance improvement over the baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. Section 2 discusses hybrid indexing. Section 3 describes our segmentation algorithm. Section 5 reports the experimental results. We conclude in section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For the topic segmentation task we would like to define a similarity measure that accounts for synonymy and semantic association between words. This similarity measure will be used to evaluate semantic cohesion between text units and the decrease in semantic cohesion will be used as an indicator of a story boundary. First, we develop a document representation which supports this similarity measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Document Indexing",
"sec_num": "2"
},
{
"text": "Capturing semantic relations between words in a document representation is difficult. Different approaches tried to overcome the term independence assumption of the bag-of-words representation (Salton and McGill, 1983 ) by using distributional term clusters (Slonim and Tishby, 2000) and expanding the document vectors with synonyms, see . Since content words can be combined into semantic classes there has been a considerable interest in low-dimensional representations. Latent Semantic Analysis (LSA) (Deerwester et al., 1990 ) is one of the best known dimensionality reduction algorithms. In the LSA space documents are indexed with latent semantic concepts. LSA maps all words to low dimensional vectors. However, the notion of semantic relatedness is defined differently for subsets of the vocabulary. In addition, the numerical information, abbreviations and the documents' style may be very good indicators of their topic. However, this information is no longer available after the dimensionality reduction.",
"cite_spans": [
{
"start": 193,
"end": 217,
"text": "(Salton and McGill, 1983",
"ref_id": "BIBREF16"
},
{
"start": 258,
"end": 283,
"text": "(Slonim and Tishby, 2000)",
"ref_id": "BIBREF17"
},
{
"start": 504,
"end": 528,
"text": "(Deerwester et al., 1990",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Document Indexing",
"sec_num": "2"
},
{
"text": "We use a hybrid approach to document indexing to address these issues. We keep the notion of latent semantic concepts and also try to preserve the specifics of the document collection. Therefore, we divide the vocabulary into two sets: nouns and the rest of the vocabulary. The set of nouns does not include proper nouns. We use a method of spectral embedding, as described below and compute a low-dimensional representation for documents using only the nouns. We also compute a tf-idf representation for documents using the other set of words. Since we can treat each latent semantic concept in the low-dimensional representation as part of the vocabulary, we combine the two vector representations for each document by concatenating them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Document Indexing",
"sec_num": "2"
},
{
"text": "A vector space representation for documents and sentences is convenient and makes the similarity metrics such as cosine and distance readily available. However, those metrics will not work if they don't have a meaningful linguistic interpretation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spectral Embedding",
"sec_num": "2.1"
},
{
"text": "Spectral methods comprise a family of algorithms that embed terms and documents in a lowdimensional vector space. These methods use pairwise relations between the data points encoded in a similarity matrix. The main step is to find an embedding for the data that preserves the original similarities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spectral Embedding",
"sec_num": "2.1"
},
{
"text": "GLSA We use Generalized Latent Semantic Analysis (GLSA) (Matveeva et al., 2005) to compute spectral embedding for nouns. GLSA computes term vectors and since we would like to use spectral embedding for nouns, it is well-suited for our approach. GLSA extends the ideas of LSA by defining different ways to obtain the similarities matrix and has been shown to outperform LSA on a number of applications (Matveeva and Levow, 2006) .",
"cite_spans": [
{
"start": 56,
"end": 79,
"text": "(Matveeva et al., 2005)",
"ref_id": "BIBREF14"
},
{
"start": 401,
"end": 427,
"text": "(Matveeva and Levow, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spectral Embedding",
"sec_num": "2.1"
},
{
"text": "GLSA begins with a matrix of pair-wise term similarities S, computes its eigenvectors U and uses the first k of them to represent terms and documents, for details see (Matveeva et al., 2005) . The justification for this approach is the theorem by Eckart and Young (Golub and Reinsch, 1971) stating that inner product similarities between the term vectors based on the eigenvectors of S represent the best elementwise approximation to the entries in S. In other words, the inner product similarity in the GLSA space preserves the semantic similarities in S.",
"cite_spans": [
{
"start": 167,
"end": 190,
"text": "(Matveeva et al., 2005)",
"ref_id": "BIBREF14"
},
{
"start": 264,
"end": 289,
"text": "(Golub and Reinsch, 1971)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spectral Embedding",
"sec_num": "2.1"
},
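To make this concrete, here is a minimal sketch of the spectral embedding step, assuming a precomputed symmetric noun-noun similarity matrix S; the use of numpy, the eigenvalue-scaled embedding, and all variable names are our own illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def glsa_term_vectors(S, k):
    """Sketch of spectral embedding: return k-dimensional term vectors whose
    inner products approximate the pair-wise similarity matrix S (symmetric)."""
    eigvals, eigvecs = np.linalg.eigh(S)        # eigenvalues in ascending order
    top = np.argsort(eigvals)[::-1][:k]         # indices of the k largest eigenvalues
    lam, U = np.clip(eigvals[top], 0.0, None), eigvecs[:, top]
    return U * np.sqrt(lam)                     # row t = embedding of term t

# Toy usage with a hand-made 4x4 similarity matrix over four "nouns".
S = np.array([[1.0, 0.8, 0.1, 0.0],
              [0.8, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.7],
              [0.0, 0.1, 0.7, 1.0]])
W = glsa_term_vectors(S, k=2)
print(np.round(W @ W.T, 2))  # inner products roughly reproduce S
```

Scaling the eigenvectors by the square roots of the eigenvalues is one common way to make the inner products of the embedded terms reproduce the truncated similarity matrix, in the spirit of the Eckart-Young argument above.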
{
"text": "Since our representation will try to preserve semantic similarities in S it is important to have a matrix of similarities which is linguistically motivated. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spectral Embedding",
"sec_num": "2.1"
},
{
"text": "PMI Following (Turney, 2001; Matveeva et al., 2005) , we use point-wise mutual information (PMI) to compute the matrix S. PMI between random variables representing the words w i and w j is computed as",
"cite_spans": [
{
"start": 14,
"end": 28,
"text": "(Turney, 2001;",
"ref_id": "BIBREF18"
},
{
"start": 29,
"end": 51,
"text": "Matveeva et al., 2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Term Similarity",
"sec_num": "2.2"
},
{
"text": "P M I(w i , w j ) = log P (W i = 1, W j = 1) P (W i = 1)P (W j = 1) . (1) Thus, for GLSA, S(w i , w j ) = P M I(w i , w j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Term Similarity",
"sec_num": "2.2"
},
{
"text": "Co-occurrence Proximity An advantage of PMI is the notion of proximity. The co-occurrence statistics for PMI are typically computed using a sliding window. Thus, PMI will be large only for words that co-occur within a small context of fixed size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Term Similarity",
"sec_num": "2.2"
},
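As an illustration of this estimation, the sketch below computes PMI from sliding-window co-occurrence counts; the window size default, the toy corpus, the treatment of unseen pairs, and the function names are our own assumptions (the paper only states that a sliding window is used, with size 8 in the experiments).

```python
from collections import Counter
from itertools import combinations
import math

def sliding_window_pmi(docs, window=8):
    """Return a scoring function pmi(w_i, w_j) estimated from the fraction
    of sliding windows in which the two words (co-)occur."""
    word_counts, pair_counts, n_windows = Counter(), Counter(), 0
    for tokens in docs:
        for start in range(max(1, len(tokens) - window + 1)):
            win = set(tokens[start:start + window])
            n_windows += 1
            word_counts.update(win)                                  # windows containing w
            pair_counts.update(frozenset(p) for p in combinations(win, 2))
    def pmi(wi, wj):
        p_ij = pair_counts[frozenset((wi, wj))] / n_windows
        if p_ij == 0.0:
            return 0.0                                               # unseen pair: simplification
        p_i, p_j = word_counts[wi] / n_windows, word_counts[wj] / n_windows
        return math.log(p_ij / (p_i * p_j))
    return pmi

# Toy usage on two tiny "documents".
pmi = sliding_window_pmi([["cuba", "refugee", "base", "cuba", "policy"],
                          ["gm", "recall", "minivan", "carmaker"]], window=3)
print(round(pmi("cuba", "base"), 2))
```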
{
"text": "Although GLSA was successfully applied to synonymy induction (Matveeva et al., 2005) , we would like to point out that the GLSA discovers semantic association in a broad sense. Table 1 shows a few words from the TDT2 corpus and their nearest neighbors in the GLSA space. We can see that for \"witness\", \"finance\" and \"broadcast\" words are grouped into corresponding semantic classes. The nearest neighbors for \"hearing\" and \"stay\" represent their different senses. Interestingly, even for the abstract noun \"surprise\" the nearest neighbors are meaningful.",
"cite_spans": [
{
"start": 61,
"end": 84,
"text": "(Matveeva et al., 2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 177,
"end": 184,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Semantic Association vs. Synonymy",
"sec_num": null
},
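For reference, nearest-neighbor lists such as those in Table 1 can be read off the embedding with a few lines of code; the cosine measure, the dictionary of term vectors, and the variable names below are our assumptions, not a description of how Table 1 was actually produced.

```python
import numpy as np

def nearest_neighbors(word, term_vectors, k=6):
    """Return the k terms whose vectors have the highest cosine similarity
    to the query word's vector; term_vectors maps word -> 1-D numpy array."""
    q = term_vectors[word]
    q = q / (np.linalg.norm(q) + 1e-12)
    scores = {w: float(v @ q) / (np.linalg.norm(v) + 1e-12)
              for w, v in term_vectors.items() if w != word}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```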
{
"text": "We have two sets of the vocabulary terms: a set of nouns, N , and the other words, T . We compute tf-idf document vectors indexed with the words in T :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Indexing",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d i = (\u03b1 i (w 1 ), \u03b1 i (w 2 ), ..., \u03b1 i (w |T | )),",
"eq_num": "(2)"
}
],
"section": "Document Indexing",
"sec_num": "2.3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Indexing",
"sec_num": "2.3"
},
{
"text": "\u03b1 i (w t ) = tf(w t , d i ) * idf(w t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Indexing",
"sec_num": "2.3"
},
{
"text": "We also compute a k-dimensional representation with latent concepts c i as a weighted linear combination of GLSA term vectors w t :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Indexing",
"sec_num": "2.3"
},
{
"text": "d i = (c 1 , ..., c k ) = t=1:|N | \u03b1 i (w t ) * w t , (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Indexing",
"sec_num": "2.3"
},
{
"text": "We concatenate these two representations to generate a hybrid indexing of documents:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Indexing",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d i = (\u03b1 i (w 1 ), ..., \u03b1 i (w |T | ), c 1 , ...c k )",
"eq_num": "(4)"
}
],
"section": "Document Indexing",
"sec_num": "2.3"
},
{
"text": "In our experiments, we compute document and sentence representation using three indexing schemes: the tf-idf baseline, the GLSA representation and the hybrid indexing. The GLSA indexing computes term vectors for all vocabulary words; document and sentence vectors are generated as linear combinations of term vectors, as shown above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Indexing",
"sec_num": "2.3"
},
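The three representations above reduce to a few lines of vector arithmetic; the following sketch of the hybrid scheme (equations 2-4) assumes precomputed idf weights, a noun set, and k-dimensional GLSA noun vectors, and the helper names are ours rather than the authors' implementation.

```python
import numpy as np
from collections import Counter

def hybrid_vector(tokens, nouns, vocab_T, idf, glsa_vecs, k):
    """Hybrid indexing: tf-idf over the non-noun vocabulary T (eq. 2),
    concatenated with a GLSA combination of the document's nouns (eq. 3-4).
    vocab_T is an ordered list of T; glsa_vecs maps noun -> k-dim numpy array."""
    tf = Counter(tokens)
    d_T = np.array([tf[w] * idf.get(w, 0.0) for w in vocab_T])   # tf-idf part
    d_N = np.zeros(k)                                            # latent concept part
    for w in tf:
        if w in nouns and w in glsa_vecs:
            d_N += tf[w] * idf.get(w, 0.0) * glsa_vecs[w]
    return np.concatenate([d_T, d_N])                            # eq. 4
```

With this layout, the inner product of two hybrid vectors decomposes exactly into term matching over T plus the pair-wise semantic associations between the documents' nouns, which is the similarity combination used in Section 2.4.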
{
"text": "One can define document similarity at different levels of semantic content. Documents can be similar because they discuss the same people or events and because they discuss related subjects and contain semantically related words. Hybrid Indexing allows us to combine both definitions of similarity. Each representation supports a different similarity measure. tf-idf uses term-matching, the GLSA representation uses semantic association in the latent semantic space computed for all words, and hybrid indexing uses a combination of both: term-matching for named entities and content words other than nouns combined with semantic association for nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document similarity",
"sec_num": "2.4"
},
{
"text": "In the GLSA space, the inner product between document vectors contains all pair-wise inner product between their words, which allows one to detect semantic similarity beyond term matching:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document similarity",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d i , d j = w\u2208d i v\u2208d j \u03b1 i (w)\u03b1 j (v) w, v",
"eq_num": "(5)"
}
],
"section": "Document similarity",
"sec_num": "2.4"
},
{
"text": "If documents contain words which are different but semantically related, the inner product between the term vectors will contribute to the document similarity, as illustrated with an example in section 5. When we compare two documents indexed with the hybrid indexing scheme, we compute a combination of similarity measures:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document similarity",
"sec_num": "2.4"
},
{
"text": "d i , d j = n k \u2208d i nm\u2208d j \u03b1 i (n k )\u03b1 j (n m ) n k , n m + t\u2208T \u03b1 i (t) * \u03b1 j (t).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document similarity",
"sec_num": "2.4"
},
{
"text": "(6) Document similarity contains semantic association between all pairs of nouns and uses term-matching for the rest of the vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document similarity",
"sec_num": "2.4"
},
{
"text": "Our approach to topic segmentation is based on semantic cohesion supported by the hybrid indexing. Topic segmentation approaches use either sentences (Galley et al., 2003) or blocks of words as text units (Hearst, 1994) . We used both variants in our experiments. When using blocks, we computed blocks of a fixed size (typically 20 words) sliding over the documents in a fixed step size (10 or 5 words). The algorithm predicts a story boundary when the semantic cohesion between two consecutive units drops. Blocks can cross story boundaries, thus many predicted boundaries will be displaced with respect to the actual boundary.",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "(Galley et al., 2003)",
"ref_id": "BIBREF6"
},
{
"start": 205,
"end": 219,
"text": "(Hearst, 1994)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Segmentation with Semantic Cohesion",
"sec_num": "3"
},
{
"text": "Averaged similarity In our preliminary experiments we used the largest difference in score to predict story boundary, following the TextTiling approach (Hearst, 1994) . We found, however, that in our document collection the word overlap between sentences was often not large and pair-wise similarity could drop to zero even for sentences within the same story, as will be illustrated below. We could not obtain satisfactory results with this approach. Therefore, we used the average similarity by using a history of fixed size n. The semantic cohesion score was computed for the position between two text units, t i and t j as follows:",
"cite_spans": [
{
"start": 152,
"end": 166,
"text": "(Hearst, 1994)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Segmentation with Semantic Cohesion",
"sec_num": "3"
},
{
"text": "score(t i , t j ) = 1 n n\u22121 k=0 t i\u2212k , t j (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Segmentation with Semantic Cohesion",
"sec_num": "3"
},
{
"text": "Our approach predicts story boundaries at the minima of the semantic cohesion score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Segmentation with Semantic Cohesion",
"sec_num": "3"
},
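A compact sketch of this segmentation loop is given below; the unit vectors are assumed to come from one of the indexing schemes of Section 2, and the handling of the history, the minimum-gap constraint, and all function names are our own simplifications of the procedure described above.

```python
import numpy as np

def cohesion_scores(unit_vectors, history=10):
    """Equation (7): for each gap, average the inner product between the
    current unit and up to `history` preceding units; dips suggest boundaries."""
    scores = []
    for j in range(1, len(unit_vectors)):
        prev = unit_vectors[max(0, j - history):j]
        scores.append(float(np.mean([p @ unit_vectors[j] for p in prev])))
    return scores  # scores[j-1] is the cohesion across the gap before unit j

def predict_boundaries(scores, n_boundaries, min_gap=3):
    """Place boundaries at the positions with the lowest cohesion, keeping
    predictions at least min_gap units apart."""
    chosen = []
    for idx in np.argsort(scores):               # lowest scores first
        if all(abs(int(idx) - c) >= min_gap for c in chosen):
            chosen.append(int(idx))
        if len(chosen) == n_boundaries:
            break
    return sorted(chosen)
```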
{
"text": "Approximating Lexical Chains One of the motivations for our cohesion score is that it approximates lexical chains, as for example in (Galley et al., 2003) . Galley et al. (Galley et al., 2003) define lexical chains R 1 , .., R N by considering repetitions of terms t 1 , .., t N and assigning larger weights to short and compact chains. Then the lexical cohesion score between two text units t i and t j is based on the number of chains that overlap both of them:",
"cite_spans": [
{
"start": 133,
"end": 154,
"text": "(Galley et al., 2003)",
"ref_id": "BIBREF6"
},
{
"start": 157,
"end": 192,
"text": "Galley et al. (Galley et al., 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Segmentation with Semantic Cohesion",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score(t i , t j ) = N k=1 w k (t i )w k (t j ),",
"eq_num": "(8)"
}
],
"section": "Topic Segmentation with Semantic Cohesion",
"sec_num": "3"
},
{
"text": "where w k (t i ) = score(R j ) if the chain R j overlaps t i and zero otherwise. Our cohesion score takes into account only the chains for words that occur in t j and have another occurrence within n previous sentences. Due to this simplification, we compute the score based on inner products. Once we make the transition to inner products, we can use hybrid indexing and compute semantic cohesion score beyond term repetition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Segmentation with Semantic Cohesion",
"sec_num": "3"
},
{
"text": "We compare our approach to the LCseg algorithm which uses lexical chains to estimate topic boundaries (Galley et al., 2003) . Hybrid indexing allows us to compute semantic cohesion score rather than the lexical cohesion score based on word repetitions. Choi at al. used LSA for segmentation (Choi et al., 2001) . LSA (Deerwester et al., 1990 ) is a special case of spectral embedding and Choi at al. (Choi et al., 2001 ) used all vocabulary words to compute low-dimensional document vectors. We use GLSA (Matveeva et al., 2005) because it computes term vectors as opposed to the dual document-term representation with LSA and uses a different matrix of pair-wise similarities. Furthermore, Choi at al. (Choi et al., 2001 ) used clustering to predict boundaries whereas we used the average similarity scores. s1: The Cuban news agency Prensa Latina called Clinton 's announcement Friday that Cubans picked up at sea will be taken to Guantanamo Bay naval base a \" new and dangerous element \" in U S immigration policy. s2: The Cuban government has not yet publicly reacted to Clinton 's announcement that Cuban rafters will be turned away from the United States and taken to the U S base on the southeast tip of Cuba. s5: The arrival of Cuban emigrants could be an \" extraordinary aggravation \" to the situation , Prensa Latina said. s6: It noted that Cuba had already denounced the use of the base as a camp for Haitian refugees.",
"cite_spans": [
{
"start": 102,
"end": 123,
"text": "(Galley et al., 2003)",
"ref_id": "BIBREF6"
},
{
"start": 291,
"end": 310,
"text": "(Choi et al., 2001)",
"ref_id": "BIBREF3"
},
{
"start": 313,
"end": 341,
"text": "LSA (Deerwester et al., 1990",
"ref_id": null
},
{
"start": 400,
"end": 418,
"text": "(Choi et al., 2001",
"ref_id": "BIBREF3"
},
{
"start": 504,
"end": 527,
"text": "(Matveeva et al., 2005)",
"ref_id": "BIBREF14"
},
{
"start": 702,
"end": 720,
"text": "(Choi et al., 2001",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Approaches",
"sec_num": "4"
},
{
"text": "s16: The company said it was not aware of any accidents or injuries related to the defect. s17: To correct the problem , GM said dealers will install a modified interior trim piece that will reroute the seat belt. Existing approaches to hybrid indexing used different weights for proper nouns, nouns phrase heads and use WordNet synonyms to expand the documents, for example (Hatzivassiloglou et al., 2000; Hatzivassiloglou et al., 2001 ). Our approach does not require linguistic resources and learning the weights. The semantic associations between nouns are estimated using spectral embedding.",
"cite_spans": [
{
"start": 375,
"end": 406,
"text": "(Hatzivassiloglou et al., 2000;",
"ref_id": "BIBREF8"
},
{
"start": 407,
"end": 436,
"text": "Hatzivassiloglou et al., 2001",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Approaches",
"sec_num": "4"
},
{
"text": "The first TDT collection is part of the LCseg toolkit 1 (Galley et al., 2003) and we used it to compare our approach to LCseg. We used the part of this collection with 50 files with 22 documents each.",
"cite_spans": [
{
"start": 56,
"end": 77,
"text": "(Galley et al., 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "We also used the TDT2 collection 2 of news articles from six news agencies in 1998. We used only 9,738 documents that are assigned to one topic and have length more than 50 words. We used the Lemur toolkit 3 with stemming and stop words list for the tf-idf indexing; we used Bikel's parser 4 to obtain the POS-tags and select nouns; we used the PLA-PACK package (Bientinesi et al., 2003) to compute the eigenvalue decomposition.",
"cite_spans": [
{
"start": 362,
"end": 387,
"text": "(Bientinesi et al., 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "1 http://www1.cs.columbia.edu/ galley/tools.html 2 http://nist.gov/speech/tests/tdt/tdt98/ 3 http://www.lemurproject.org/ 4 http://www.cis.upenn.edu/ dbikel/software.html Evaluation For the TDT data we use the error metric p k (Beeferman et al., 1999) and WindowDiff (Pevzner and Hearst, 2002) which are implemented in the LCseg toolkit. We also used the TDT cost metric Cseg 5 , with the default parameters P(seg)=0.3, Cmiss=1, Cfa=0.3 and distance of 50 words. All these measures look at two units (words or sentences) N units apart and evaluate how well the algorithm can predict whether there is a boundary between them or not. Lower values mean better performance for all measures.",
"cite_spans": [
{
"start": 227,
"end": 251,
"text": "(Beeferman et al., 1999)",
"ref_id": "BIBREF1"
},
{
"start": 267,
"end": 293,
"text": "(Pevzner and Hearst, 2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
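For concreteness, the sketch below follows the standard definitions of p_k and WindowDiff rather than the exact LCseg/TDT implementations; representing boundaries as indices of the units after which a break occurs is our own convention.

```python
def p_k(reference, hypothesis, n_units, k):
    """p_k (Beeferman et al., 1999): fraction of unit pairs (i, i+k) on which
    reference and hypothesis disagree about being in the same segment."""
    same = lambda bounds, i, j: not any(i <= b < j for b in bounds)
    errors = sum(same(reference, i, i + k) != same(hypothesis, i, i + k)
                 for i in range(n_units - k))
    return errors / (n_units - k)

def window_diff(reference, hypothesis, n_units, k):
    """WindowDiff (Pevzner and Hearst, 2002): fraction of windows in which the
    number of reference and hypothesis boundaries differs."""
    count = lambda bounds, i, j: sum(i <= b < j for b in bounds)
    errors = sum(count(reference, i, i + k) != count(hypothesis, i, i + k)
                 for i in range(n_units - k))
    return errors / (n_units - k)
```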
{
"text": "To obtain the PMI values we used the TDT2 collection, denoted as GLSA local . Since co-occurrence statistics based on larger collections give a better approximation to linguistic similarities, we also used 700,000 documents from the English GigaWord collection, denoted as GLSA. We used a window of size 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global vs. Local GLSA Similarity",
"sec_num": null
},
{
"text": "The first set of experiments was designed to evaluate the advantage of the GLSA representation over the baseline. We compare our approach to the LCseg algorithm (Galley et al., 2003) and use sentences as segmentation unit. To avoid the issue of parameters setting when the number of boundaries is not known, we provide each algorithm with the actual numbers TDT We use the LCseg approach and our approach with the baseline tf-idf representation and the GLSA representation to segment this corpus. Table 2 shows a few sentences. Many content words are repeated, so the lexical chains is definitely a sound approach. As shown in Table 2 , in the first story the word \"Cuba\" or \"Cuban\" is repeated in every sentence thus generating a lexical chain. On the topic boundary, the word overlap between sentences is very small. At the same time, the repetition of words may also be interrupted within a story: sentence 5, 6 and sentences 14, 15, 16 have little word overlap. LCseg deals with this by defining several parameters to control chain length and gaps. This simple example illustrates the potential benefit of semantic cohesion. Table 2 shows that \"General Motors\" or \"GM\" are not repeated in every sentence of the second story. However, \"GM\", \"carmaker\" and \"company\" are semantically related. Making this information available to the segmentation algorithm allows it to establish a connection between each sentence of the second story.",
"cite_spans": [
{
"start": 161,
"end": 182,
"text": "(Galley et al., 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 627,
"end": 634,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 1129,
"end": 1136,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Topic Segmentation",
"sec_num": "5.2"
},
{
"text": "We computed pair-wise sentence similarities between pairs of consecutive sentences in the tf-idf and GLSA representations. Figure 1 shows the similarity values plotted for each sentence break. The pairwise similarities based on term-matching are very spiky and there are many zeros within the story. The GLSA-based similarity makes the dips in the similarities at the boundaries more prominent. The last plot gives the details for the sentences in table 2. In the tf-idf representation sentences without word overlap receive zero similarity but the GLSA representation is able to use the semantic association between between \"emigrants\" and \"refugees\" for sentences 5 and 6, and also the semantic association between \"carmaker\" and \"company\" for sentences 14 Table 3 : TDT segmentation results.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 131,
"text": "Figure 1",
"ref_id": null
},
{
"start": 759,
"end": 766,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Topic Segmentation",
"sec_num": "5.2"
},
{
"text": "and 15. This effect increases as we use the semantic cohesion score as in equation 7. Figure 2 shows the similarity values for tf-idf and GLSA and also the lexical cohesion scores computed by LCseg. The GLSAbased similarities are not quite as smooth as the LCseg scores, but they correctly discover the boundaries. LCseg parameters are fine-tuned for this document collection. We used a general TDT2 GLSA representation for this collection, and the only segmentation parameter we used is to avoid placing next boundary within n=3 sentences of the previous one. For this reason the predicted boundary may be one sentence off the actual boundary. These results are summarized in Table 3 . The GLSA representation performs significantly better than the tf-idf baseline. Its p k and WindowDiff scores with default parameters for LCseg are worse than for LCseg. We attribute it to the fact that we did not fine-tuned our method to this collection and that boundaries are often placed one position off the actual boundary.",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 94,
"text": "Figure 2",
"ref_id": null
},
{
"start": 677,
"end": 684,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Topic Segmentation",
"sec_num": "5.2"
},
{
"text": "TDT2 For this collection we used three different indexing schemes: the tf-idf baseline, the GLSA representation and the hybrid indexing. Each representation supports a different similarity measure. Our TDT experiments showed that the semantic cohesion score based on the GLSA representation improves the segmentation results. The variant of the TDT corpus we used is rather small and wellbalanced, see (Galley et al., 2003) for details. In the second phase of experiments we evaluate our approach on the larger TDT2 corpus. The experiments were designed to address the following issues:",
"cite_spans": [
{
"start": 402,
"end": 423,
"text": "(Galley et al., 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Segmentation",
"sec_num": "5.2"
},
{
"text": "\u2022 performance comparison between GLSA and Hybrid indexing representations. As mentioned before, GLSA embeds all words in a low-dimensional space. Whereas semantic Table 4 : TDT2 segmentation results. Sliding blocks with size 20 and stepsize 10; similarity averaged over 10 preceeding blocks.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Topic Segmentation",
"sec_num": "5.2"
},
{
"text": "classes for nouns have theoretical linguistic justification, it is harder to motivate a latent space representation for example for proper nouns. Therefore, we want to evaluate the advantage of using spectral embedding only for nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Segmentation",
"sec_num": "5.2"
},
{
"text": "\u2022 collection dependence of similarities. The similarity matrix S is computed using the TDT2 corpus (GLSA local ) and using the larger Giga-Word corpus. The larger corpus provides more reliable co-occurrence statistics. On the other hand, word distribution is different from that in the TDT2 corpus. We wanted to evaluate whether semantic similarities are collection independent. Table 4 shows the performance evaluation. We show the results computed using blocks containing 20 words (after preprocessing) with step size 10. We tried other parameter values but did not achieve better performance, which is consistent with other research (Hearst, 1994; Galley et al., 2003) . We show the results for two settings: predict a known number of boundaries, and predict boundaries using a threshold. In our experiments we used the average of the smallest N scores as threshold, N = 4000 showing best results.",
"cite_spans": [
{
"start": 636,
"end": 650,
"text": "(Hearst, 1994;",
"ref_id": "BIBREF10"
},
{
"start": 651,
"end": 671,
"text": "Galley et al., 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 379,
"end": 386,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Topic Segmentation",
"sec_num": "5.2"
},
{
"text": "The spectral embedding based representations (GLSA, Hybrid) significantly outperform the baseline. This confirms the advantage of the semantic cohesion score vs. term-matching. Hybrid indexing outperforms the GLSA representation supporting our intuition that semantic association is best defined for nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Segmentation",
"sec_num": "5.2"
},
{
"text": "We used the GigaWord corpus to obtain the pairwise word associations for the GLSA and Hybrid representations. We also computed GLSA local and Hybrid local using the TDT2 corpus to obtain the pair-wise word associations. The co-occurrence statistics based on the GigaWord corpus provide more reliable estimations of semantic association despite the difference in term distribution. The difference is larger for the GLSA case when we compute the embedding for all words, GLSA performs better than GLSA local . Hybrid local performs only slightly worse than Hybrid. This seems to support the claim that semantic associations between nouns are largely collection independent. On the other hand, semantic associations for proper names are collection dependent at least because the collections are static but the semantic relations of proper names may change over time. The semantic space for a name of a president, for example, is different for the period of time of his presidency and for the time before and after that.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Segmentation",
"sec_num": "5.2"
},
{
"text": "Disappointingly, we could not achieve good results with LCseg. It tends to split stories into short paragraphs. Hybrid indexing could achieve results comparable to state-of-the art approaches, see (Fiscus et al., 1998) for an overview.",
"cite_spans": [
{
"start": 197,
"end": 218,
"text": "(Fiscus et al., 1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Segmentation",
"sec_num": "5.2"
},
{
"text": "We presented a topic segmentation approach based on semantic cohesion scores. Our approach is domain independent, does not require training or use of lexical resources. The scores are computed based on the hybrid document indexing which uses spectral embedding in the space of latent concepts for nouns and keeps proper nouns and other specifics of the documents collections unchanged. We approximate the lexical chains approach by simplifying the definition of a chain which allows us to use inner products as basis for the similarity score. The similarity score takes into account semantic relations be-tween nouns beyond term matching. This semantic cohesion approach showed good results on the topic segmentation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "We intend to extend the hybrid indexing approach by considering more vocabulary subsets. Syntactic similarity is more appropriate for verbs, for example, than co-occurrence. As a next step, we intend to embed verbs using syntactic similarity. It would also be interesting to use lexical chains for proper names and learn the weights for different similarity scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "www.nist.gov/speech/tests/tdt/tdt98/doc/ tdt2.eval.plan.98.v3.7.ps",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Topic Detection and Tracking: Event-based Information Organization",
"authors": [],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Allan, editor. 2002. Topic Detection and Tracking: Event-based Information Organization. Kluwer Aca- demic Publishers.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Statistical models for text segmentation. Machine Learning",
"authors": [
{
"first": "Doug",
"middle": [],
"last": "Beeferman",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "34",
"issue": "",
"pages": "177--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Doug Beeferman, Adam Berger, and John D. Lafferty. 1999. Statistical models for text segmentation. Ma- chine Learning, 34(1-3):177-210.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A parallel eigensolver for dense symmetric matrices based on multiple relatively robust representations",
"authors": [
{
"first": "Paolo",
"middle": [],
"last": "Bientinesi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Inderjit",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"A"
],
"last": "Dhilon",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Van De Geijn",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paolo Bientinesi, Inderjit S. Dhilon, and Robert A. van de Geijn. 2003. A parallel eigensolver for dense sym- metric matrices based on multiple relatively robust representations. UT CS Technical Report TR-03-26.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Latent Semantic Analysis for text segmentation",
"authors": [
{
"first": "Freddy",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Wiemer-Hastings",
"suffix": ""
},
{
"first": "Johanna",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "109--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Freddy Choi, Peter Wiemer-Hastings, and Johanna Moore. 2001. Latent Semantic Analysis for text seg- mentation. In Proceedings of EMNLP, pages 109- 117.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Indexing by Latent Semantic Analysis",
"authors": [
{
"first": "C",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Deerwester",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "George",
"middle": [
"W"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"A"
],
"last": "Furnas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Society of Information Science",
"volume": "41",
"issue": "6",
"pages": "391--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott C. Deerwester, Susan T. Dumais, Thomas K. Lan- dauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by Latent Semantic Analysis. Jour- nal of the American Society of Information Science, 41(6):391-407.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "NIST's 1998 topic detection and tracking evaluation (tdt2)",
"authors": [
{
"first": "J",
"middle": [
"G"
],
"last": "Fiscus",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Doddington",
"suffix": ""
},
{
"first": "John",
"middle": [
"S"
],
"last": "Garofolo",
"suffix": ""
},
{
"first": "Alvin",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of NIST's 1998 Topic Detection and Tracking Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. G. Fiscus, George Doddington, John S. Garofolo, and Alvin Martin. 1998. NIST's 1998 topic detection and tracking evaluation (tdt2). In Proceedings of NIST's 1998 Topic Detection and Tracking Evaluation.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Discourse segmentation of multi-party conversation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Jing",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Galley, K. McKeown, E. Fosler-Lussier, and H. Jing. 2003. Discourse segmentation of multi-party conver- sation. In Proceedings of ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Handbook for Matrix Computation II, Linear Algebra",
"authors": [
{
"first": "G",
"middle": [],
"last": "Golub",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Reinsch",
"suffix": ""
}
],
"year": 1971,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Golub and C. Reinsch. 1971. Handbook for Ma- trix Computation II, Linear Algebra. Springer-Verlag, New York.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An investigation of linguistic features and clustering algorithms for topical document clustering",
"authors": [
{
"first": "V",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Gravano",
"suffix": ""
},
{
"first": "Ankineedu",
"middle": [],
"last": "Maganti",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "224--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Hatzivassiloglou, Luis Gravano, and Ankineedu Mag- anti. 2000. An investigation of linguistic features and clustering algorithms for topical document clustering. In Proceedings of SIGIR, pages 224-231.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Simfinder: A flexible clustering tool for summarization",
"authors": [
{
"first": "V",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay Min-Yen Kan",
"suffix": ""
},
{
"first": "Judith",
"middle": [
"L"
],
"last": "Klavans",
"suffix": ""
},
{
"first": "Melissa",
"middle": [
"L"
],
"last": "Holcombe",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "41--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Hatzivassiloglou, Regina Barzilay Min-Yen Kan Ju- dith L. Klavans, Melissa L. Holcombe, and Kath- leen R. McKeown. 2001. Simfinder: A flexible clustering tool for summarization. In Proceedings of NAACL, pages 41-49.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multi-paragraph segmentation of expository text",
"authors": [
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A. Hearst. 1994. Multi-paragraph segmentation of expository text. In Proceedings of ACL, pages 9-16.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Text segmentation based on similarity between words",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Kozima",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "286--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideki Kozima. 1993. Text segmentation based on sim- ilarity between words. In Proceedings of ACL, pages 286-288.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Dictionary-based techniques for cross-language information retrieval. Information Processing and Management: Special Issue on Cross-language Information Retrieval",
"authors": [
{
"first": "Gina-Anne",
"middle": [],
"last": "Levow",
"suffix": ""
},
{
"first": "Douglas",
"middle": [
"W"
],
"last": "Oard",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gina-Anne Levow, Douglas W. Oard, and Philip Resnik. 2005. Dictionary-based techniques for cross-language information retrieval. Information Processing and Management: Special Issue on Cross-language Infor- mation Retrieval.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Graphbased Generalized Latent Semantic Analysis for document representation",
"authors": [
{
"first": "Irina",
"middle": [],
"last": "Matveeva",
"suffix": ""
},
{
"first": "Gina-Anne",
"middle": [],
"last": "Levow",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of the TextGraphs Workshop at HLT/NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irina Matveeva and Gina-Anne Levow. 2006. Graph- based Generalized Latent Semantic Analysis for docu- ment representation. In Proc. of the TextGraphs Work- shop at HLT/NAACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Generalized Latent Semantic Analysis for term representation",
"authors": [
{
"first": "Irina",
"middle": [],
"last": "Matveeva",
"suffix": ""
},
{
"first": "Gina-Anne",
"middle": [],
"last": "Levow",
"suffix": ""
},
{
"first": "Ayman",
"middle": [],
"last": "Farahat",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Royer",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of RANLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irina Matveeva, Gina-Anne Levow, Ayman Farahat, and Christian Royer. 2005. Generalized Latent Semantic Analysis for term representation. In Proc. of RANLP.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A critique and improvement of an evaluation metric for text segmentation",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Pevzner",
"suffix": ""
},
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 2002,
"venue": "Comput. Linguist",
"volume": "28",
"issue": "1",
"pages": "19--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Pevzner and Marti A. Hearst. 2002. A critique and improvement of an evaluation metric for text segmen- tation. Comput. Linguist., 28(1):19-36.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Introduction to Modern Information Retrieval",
"authors": [
{
"first": "Gerard",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Mcgill",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerard Salton and Michael J. McGill. 1983. Introduction to Modern Information Retrieval. McGraw-Hill.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Document clustering using word clusters via the information bottleneck method",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
},
{
"first": "Naftali",
"middle": [],
"last": "Tishby",
"suffix": ""
}
],
"year": 2000,
"venue": "Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "208--215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Slonim and Naftali Tishby. 2000. Document clus- tering using word clusters via the information bottle- neck method. In Research and Development in Infor- mation Retrieval, pages 208-215.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Mining the web for synonyms: PMI-IR versus LSA on TOEFL",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2001,
"venue": "Lecture Notes in Computer Science",
"volume": "2167",
"issue": "",
"pages": "491--502",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney. 2001. Mining the web for synonyms: PMI-IR versus LSA on TOEFL. Lecture Notes in Computer Science, 2167:491-502.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Multilingual topic detection and tracking: Successful research enabled by corpora and evaluation",
"authors": [
{
"first": "C",
"middle": [],
"last": "Wayne",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of Language Resources and Evaluation Conference (LREC)",
"volume": "",
"issue": "",
"pages": "1487--1494",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Wayne. 2000. Multilingual topic detection and track- ing: Successful research enabled by corpora and eval- uation. In Proceedings of Language Resources and Evaluation Conference (LREC), pages 1487-1494.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "TDT. Pair-wise sentence similarities for tf-idf (left), GLSA (middle); x-axis shows story boundaries. Details for the first 20 sentences, table 2 (right). TDT. Pair-wise sentence similarities for tf-idf (left), GLSA (middle) averaged over 10 preceeding sentences; LCseg lexical cohesion scores (right). X-axis shows story boundaries. of boundaries."
},
"TABREF0": {
"num": null,
"content": "<table><tr><td>Word</td><td/><td colspan=\"3\">Nearest Neighbors in GLSA Space</td><td/><td/></tr><tr><td>witness</td><td>testify</td><td>prosecutor</td><td>trial</td><td colspan=\"2\">testimony juror</td><td>eyewitness</td></tr><tr><td>finance</td><td>fund</td><td>bank</td><td colspan=\"3\">investment economy crisis</td><td>category</td></tr><tr><td colspan=\"2\">broadcast television</td><td>TV</td><td>satellite</td><td>ABC</td><td>CBS</td><td>radio</td></tr><tr><td>hearing</td><td>hearing</td><td>judge</td><td>voice</td><td>chatter</td><td>sound</td><td>appeal</td></tr><tr><td>surprise</td><td colspan=\"3\">announcement disappointment stunning</td><td>shock</td><td colspan=\"2\">reaction astonishment</td></tr><tr><td>rest</td><td>stay</td><td>remain</td><td>keep</td><td>leave</td><td colspan=\"2\">portion economy</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Words' nearest neighbors in the GLSA semantic space."
},
"TABREF1": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "TDT. The first 17 sentences in the first file."
}
}
}
}