{ "paper_id": "I17-1020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:38:19.395819Z" }, "title": "Information Bottleneck Inspired Method For Chat Text Segmentation", "authors": [ { "first": "S", "middle": [], "last": "Vishal", "suffix": "", "affiliation": { "laboratory": "", "institution": "TCS Research New Delhi", "location": { "country": "India" } }, "email": "s.vishal3@tcs.com" }, { "first": "Mohit", "middle": [], "last": "Yadav", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts", "location": { "settlement": "Amherst" } }, "email": "" }, { "first": "Lovekesh", "middle": [], "last": "Vig", "suffix": "", "affiliation": { "laboratory": "", "institution": "TCS Research New Delhi", "location": { "country": "India" } }, "email": "lovekesh.vig@tcs.com" }, { "first": "Gautam", "middle": [], "last": "Shroff", "suffix": "", "affiliation": { "laboratory": "", "institution": "TCS Research New Delhi", "location": { "country": "India" } }, "email": "gautam.shroff@tcs.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a novel technique for segmenting chat conversations using the information bottleneck method (Tishby et al., 2000), augmented with sequential continuity constraints. Furthermore, we utilize critical non-textual clues such as time between two consecutive posts and people mentions within the posts. To ascertain the effectiveness of the proposed method, we have collected data from public Slack conversations and Fresco, a proprietary platform deployed inside our organization. Experiments demonstrate that the proposed method yields an absolute (relative) improvement of as high as 3.23% (11.25%). 
To facilitate future research, we are releasing manual annotations for segmentation on public Slack conversations.", "pdf_parse": { "paper_id": "I17-1020", "_pdf_hash": "", "abstract": [ { "text": "We present a novel technique for segmenting chat conversations using the information bottleneck method (Tishby et al., 2000), augmented with sequential continuity constraints. Furthermore, we utilize critical non-textual clues such as time between two consecutive posts and people mentions within the posts. To ascertain the effectiveness of the proposed method, we have collected data from public Slack conversations and Fresco, a proprietary platform deployed inside our organization. Experiments demonstrate that the proposed method yields an absolute (relative) improvement of as high as 3.23% (11.25%). To facilitate future research, we are releasing manual annotations for segmentation on public Slack conversations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The prolific upsurge in the amount of chat conversations has notably influenced the way people wield languages for conversations. Moreover, conversation platforms have now become prevalent for both personal and professional usage. For instance, in a large enterprise scenario, project managers can utilize these platforms for various tasks such as decision auditing and dynamic responsibility allocation (Joty et al., 2013) . Logs of such conversations offer potentially valuable information for various other applications such as automatic assessment of possible collaborative work among people (Rebedea et al., 2011) .", "cite_spans": [ { "start": 404, "end": 423, "text": "(Joty et al., 2013)", "ref_id": "BIBREF19" }, { "start": 596, "end": 618, "text": "(Rebedea et al., 2011)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "indicates that both authors contributed equally. 
$ indicates that the author was at TCS Research New-Delhi during the course of this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It is thus vital to develop effective segmentation methods that can separate discussions into small granules of independent conversational snippets. By 'independent', we mean that a segment should, as far as possible, be self-contained and discuss a single topic, so that the segment can be suggested whenever a similar conversation occurs again. As a result, various short text similarity methods can be employed directly. Segmentation can also potentially act as an empowering preprocessing step for various downstream tasks such as automatic summarization (Dias et al., 2007) , text generation (Barzilay and Lee, 2004) , information extraction (Allan, 2012) , and conversation visualization (Liu et al., 2012) . It is worth noting that chat segmentation presents a number of gruelling challenges, such as the informal nature of the text, the frequently short length of the posts, and a significant proportion of irrelevant interspersed text (Schmidt and Stone).", "cite_spans": [ { "start": 566, "end": 585, "text": "(Dias et al., 2007)", "ref_id": "BIBREF8" }, { "start": 604, "end": 628, "text": "(Barzilay and Lee, 2004)", "ref_id": "BIBREF3" }, { "start": 654, "end": 667, "text": "(Allan, 2012)", "ref_id": "BIBREF1" }, { "start": 701, "end": 719, "text": "(Liu et al., 2012)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Research in text segmentation has a long history, going back to the earliest attempt of Kozima (1993) . Since then many methods, including but not limited to, TextTiling (Hearst, 1997), Choi's segmentation (Choi, 2000) , representation learning based on semantic embeddings (Alemi and Ginsparg, 2015), and topic models (Du et al., 2015a) have been presented. 
However, very little research effort has been directed at segmenting informal chat text. For instance, Schmidt and Stone have attempted to highlight the challenges of chat text segmentation, though they have not presented any algorithm specific to chat text.", "cite_spans": [ { "start": 87, "end": 100, "text": "Kozima (1993)", "ref_id": "BIBREF22" }, { "start": 205, "end": 217, "text": "(Choi, 2000)", "ref_id": "BIBREF7" }, { "start": 318, "end": 336, "text": "(Du et al., 2015a)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Information Bottleneck (IB) method has been successfully applied to clustering in the NLP domain (Slonim and Tishby, 2000) . Specifically, IB attempts to balance the trade-off between accuracy and compression (or complexity) while clustering the target variable, given a joint probability distribution between the target variable and an observed relevant variable. Similar to clustering, this paper interprets the task of text segmentation as a compression task with a constraint that allows only contiguous text snippets to be in a group.", "cite_spans": [ { "start": 101, "end": 126, "text": "(Slonim and Tishby, 2000)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The focus of this paper is to develop text segmentation methods for chat text utilizing the IB framework. 
In the process, this paper makes the following major contributions: (i) We introduce an IB inspired objective function for the task of text segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(ii) We develop an agglomerative algorithm to optimize the proposed objective function that also respects the necessary sequential continuity constraint for text segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(iii) To the best of our knowledge, this paper is the first attempt that addresses segmentation for chat text and incorporates non-textual clues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(iv) We have created a chat text segmentation dataset and are releasing it for future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is organized as follows: we present a review of related literature in Section 2. Then, we formulate the text segmentation problem and define the necessary notation in Section 3. Following this, we explain the proposed methodology in Section 4. Section 5 presents experiments and provides details on the dataset, experimental set-up, baselines, results, and the effect of parameters. Finally, conclusions and potential directions for future work are outlined in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The IB method was originally introduced as a generalization of rate distortion theory, which balances the trade-off between the preservation of information about a relevance variable and the distortion of the target variable. 
Later on, similar to this work, a greedy bottom-up (agglomerative) IB based approach (Slonim and Tishby, 1999, 2000) has been successfully applied to NLP tasks such as document clustering.", "cite_spans": [ { "start": 309, "end": 328, "text": "Tishby, 1999, 2000)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Furthermore, the IB method has been widely studied for multiple machine learning tasks, including but not limited to, speech diarization (Vijayasenan et al., 2009) , image segmentation (Bardera et al., 2009) , image clustering (Gordon et al., 2003) , and visualization (Kamimura, 2010) . Particularly, similar to this paper, IB based image segmentation has treated segmentation as the compression part of the IB method. However, image segmentation does not involve continuity constraints, as applying them would prevent the exploitation of similarity within the image. Yet another similar attempt that utilizes information theoretic terms as an objective (only the first term of the IB approach) has been made for the task of text segmentation and alignment (Sun et al., 2006) .", "cite_spans": [ { "start": 137, "end": 163, "text": "(Vijayasenan et al., 2009)", "ref_id": "BIBREF39" }, { "start": 185, "end": 207, "text": "(Bardera et al., 2009)", "ref_id": "BIBREF2" }, { "start": 227, "end": 248, "text": "(Gordon et al., 2003)", "ref_id": "BIBREF16" }, { "start": 269, "end": 285, "text": "(Kamimura, 2010)", "ref_id": "BIBREF20" }, { "start": 756, "end": 774, "text": "(Sun et al., 2006)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Broadly speaking, a typical text segmentation method comprises two components: (a) a representation that is consumed for every independent text snippet, and (b) a search procedure for segmentation boundaries that optimizes segmentation objectives. 
Here, we review the text segmentation literature by organizing it into 3 categories based on focus: Category1 - (a), Category2 - (b), and Category3 - both (a) and (b).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Category1 approaches utilize or benefit from the great amount of effort put into developing robust topic models that can model discourse in natural language texts (Brants et al., 2002) . Recently, Du et al. (2013, 2015b) have proposed a hierarchical Bayesian model for unsupervised topic segmentation that integrates a point-wise boundary sampling algorithm used in Bayesian segmentation into a structured (ordering-based) topic model. For a more comprehensive view of classical work on topic models for text segmentation, we refer to Misra et al. (2009) ; Riedl and Biemann (2012) . This work does not explore topic models; that is left as a direction for future research.", "cite_spans": [ { "start": 159, "end": 180, "text": "(Brants et al., 2002)", "ref_id": "BIBREF6" }, { "start": 193, "end": 208, "text": "Du et al. (2013", "ref_id": "BIBREF10" }, { "start": 209, "end": 228, "text": "Du et al. ( , 2015b", "ref_id": "BIBREF11" }, { "start": 543, "end": 562, "text": "Misra et al. (2009)", "ref_id": "BIBREF25" }, { "start": 565, "end": 589, "text": "Riedl and Biemann (2012)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Category2 approaches comprise the different search procedures proposed for the task of text segmentation, including but not limited to, divisive hierarchical clustering (Choi, 2000) , dynamic programming (Kehagias et al., 2003) , and graph based clustering (Pourvali and Abadeh, 2012; Glavas et al., 2016; Utiyama and Isahara, 2001) . 
This work proposes an agglomerative IB based hierarchical clustering algorithm - an addition to the arsenal of approaches that fall in this category.", "cite_spans": [ { "start": 168, "end": 180, "text": "(Choi, 2000)", "ref_id": "BIBREF7" }, { "start": 203, "end": 226, "text": "(Kehagias et al., 2003)", "ref_id": "BIBREF21" }, { "start": 256, "end": 283, "text": "(Pourvali and Abadeh, 2012;", "ref_id": "BIBREF27" }, { "start": 284, "end": 304, "text": "Glavas et al., 2016;", "ref_id": "BIBREF15" }, { "start": 305, "end": 330, "text": "Utiyama and Isahara, 2001", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Similar to the proposed method, Category3 cuts across both of the above-introduced dimensions of segmentation. Alemi and Ginsparg (2015) have proposed the use of semantic word embeddings and a relaxed dynamic programming procedure. We similarly argue for utilizing chat clues and introduce an IB based approach augmented with sequential continuity constraints. Yet another similar attempt has been made by Joty et al. (2013) , in which they use topical and conversational clues and introduce an unsupervised random walk model for the task of text segmentation.", "cite_spans": [ { "start": 405, "end": 423, "text": "Joty et al. (2013)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Beyond the above mentioned categorization, a significant amount of research effort has been put into studying evaluation metrics for text segmentation (Pevzner and Hearst, 2002; Scaiano and Inkpen, 2012) . Here, we make use of the classical and most widely utilized metric, introduced by Beeferman et al. (1999) . Also, there have been attempts to track topic boundaries for thread discussions (Zhu et al., 2008; Wang et al., 2008) . 
While these methods look similar to the proposed method, they differ in that they attempt to recover thread structure with respect to a topic-level view of the discussions within a thread community.", "cite_spans": [ { "start": 155, "end": 181, "text": "(Pevzner and Hearst, 2002;", "ref_id": "BIBREF26" }, { "start": 182, "end": 207, "text": "Scaiano and Inkpen, 2012)", "ref_id": "BIBREF30" }, { "start": 291, "end": 314, "text": "Beeferman et al. (1999)", "ref_id": "BIBREF4" }, { "start": 397, "end": 415, "text": "(Zhu et al., 2008;", "ref_id": "BIBREF41" }, { "start": 416, "end": 434, "text": "Wang et al., 2008)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The most similar direction of research to this work is on conversation trees (Louis and Cohen, 2015) and disentangling chat conversations (Elsner and Charniak, 2010). Both of these directions cluster independent posts, leading to topic labelling and segmentation of these posts simultaneously. It is important to note that these methods do not have a sequential continuity constraint and consider lexical similarity even between distant posts (Elsner and Charniak, 2011) . Moreover, if these methods are applied only for segmentation, they are very likely to produce segments with relatively much shorter durations, as reflected in the ground truth annotations of the correspondingly released dataset (Elsner and Charniak, 2008) . 
It is worth noting that Elsner and Charniak (2010) have also advocated utilizing time gaps and people mentions, similar to the proposed method of this work.", "cite_spans": [ { "start": 77, "end": 100, "text": "(Louis and Cohen, 2015)", "ref_id": "BIBREF24" }, { "start": 447, "end": 474, "text": "(Elsner and Charniak, 2011)", "ref_id": "BIBREF14" }, { "start": 705, "end": 732, "text": "(Elsner and Charniak, 2008)", "ref_id": "BIBREF12" }, { "start": 759, "end": 785, "text": "Elsner and Charniak (2010)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Let C be an input chat text sequence C = {c 1 , ..., c i , ..., c |C| } of length |C|, where c i is a text snippet such as a sentence or a post from chat text. In a chat scenario, text post c i will have a corresponding time-stamp c t i . A segment or a subsequence can be represented as C a:b = {c a , ..., c b }. A segmentation of C is defined as a segment sequence S = {s 1 , ..., s p }, where s j = C a j :b j and b j + 1 = a j+1 . Given an input text sequence C, segmentation is defined as the task of finding the most probable segment sequence S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Description And Notations", "sec_num": "3" }, { "text": "This section first presents the proposed IB inspired method for text segmentation that conforms to the necessary constraint of sequential continuity, in Section 4.1. Next, in Section 4.2, the proposed IB inspired method is augmented to incorporate important non-textual clues that arise in a chat scenario. 
More specifically, the time between two consecutive posts and people mentions within the posts are integrated into the proposed IB inspired approach for the text segmentation task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Methodology", "sec_num": "4" }, { "text": "The IB method introduces a set of relevance variables R which encapsulate meaningful information about C while compressing the data points (Slonim and Tishby, 2000) . Similarly, we propose that a segment sequence S should also contain as much information as possible about R (i.e., maximize I(R, S)), constrained by the mutual information between S and C (i.e., minimize I(S, C)). Here, C is a chat text sequence, following the notation introduced in the previous section. The IB objective can be achieved by maximizing the following:", "cite_spans": [ { "start": 132, "end": 157, "text": "(Slonim and Tishby, 2000)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "IB Inspired Text Segmentation Algorithm", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F = I(R, S) \u2212 (1/\u03b2) \u00d7 I(S, C)", "eq_num": "(1)" } ], "section": "IB Inspired Text Segmentation Algorithm", "sec_num": "4.1" }, { "text": "In other words, the above IB objective function attempts to balance a trade-off between the most informative segmentation with respect to R and the most compact representation of C, where \u03b2 is a constant parameter that controls the relative importance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IB Inspired Text Segmentation Algorithm", "sec_num": "4.1" }, { "text": "Similar to Slonim and Tishby (2000), we model R as word clusters and optimize F in an agglomerative fashion, as explained in Algorithm 1. 
In simple words, the maximization of F boils down to agglomeratively merging the adjacent pair of posts that corresponds to the least value of d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IB Inspired Text Segmentation Algorithm", "sec_num": "4.1" }, { "text": "In Algorithm 1, p(s) is equal to p(s i ) + p(s i+1 ), and d(s i , s i+1 ) is computed using the following definition:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IB Inspired Text Segmentation Algorithm", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "d(s i , s i+1 ) = JSD[p(R|s i ), p(R|s i+1 )] \u2212 (1/\u03b2) \u00d7 JSD[p(C|s i ), p(C|s i+1 )]", "eq_num": "(2)" } ], "section": "IB Inspired Text Segmentation Algorithm", "sec_num": "4.1" }, { "text": "Here, JSD indicates the Jensen-Shannon divergence. The computation of R and p(R, C) is explained later in Section 5.2. 
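As an illustration, the merge cost d of Equation 2 can be sketched as follows. This is a minimal sketch, assuming p(R|s) and p(C|s) are available as plain probability vectors; the function names are ours, not from the paper:

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence in bits, ignoring zero entries of p
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def jsd(p, q):
    # Jensen-Shannon divergence between two distributions
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def merge_cost(p_r_s1, p_r_s2, p_c_s1, p_c_s2, beta):
    # d(s_i, s_{i+1}) from Equation 2: relevance (textual) term
    # minus (1/beta) times the compression term
    return jsd(p_r_s1, p_r_s2) - (1.0 / beta) * jsd(p_c_s1, p_c_s2)
```

Base-2 logarithms are used so that the JSD of two disjoint distributions equals 1.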
Stopping criterion for Algorithm 1 is SC > \u03b8, where SC is computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IB Inspired Text Segmentation Algorithm", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "SC = I(R, S) / I(R, C)", "eq_num": "(3)" } ], "section": "IB Inspired Text Segmentation Algorithm", "sec_num": "4.1" }, { "text": "Algorithm 1: IB inspired text segmentation. Input: joint distribution p(R, C); trade-off parameter \u03b2. Output: segmentation sequence S. Initialization: S \u2190 C; calculate \u2206F (s i , s i+1 ) = p(s) \u00d7 d(s i , s i+1 ) \u2200 s i \u2208 S. 1: while stopping criterion is false do 2: {i} = argmin i \u2206F (s i , s i+1 ); 3: merge {s i , s i+1 } \u21d2 s \u2208 S; 4: update \u2206F (s, s i\u22121 ) and \u2206F (s, s i+2 ); 5: end", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IB Inspired Text Segmentation Algorithm", "sec_num": "4.1" }, { "text": "The value of SC is expected to decrease due to a relatively large dip in the value of I(R, S) when more dissimilar clusters are merged. Therefore, SC provides strong clues to terminate the proposed IB approach. The inspiration behind this specific computation of SC comes from the fact that it has produced stable results on the similar task of speaker diarization (Vijayasenan et al., 2009) . 
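The greedy merging loop of Algorithm 1 can be sketched as follows. This is a simplified version that recomputes all adjacent merge costs in every iteration instead of updating only the neighbours of the merged pair; the segment representation, the function names, and the cost/stop callables are our assumptions, not the paper's implementation:

```python
import numpy as np

def ib_segment(p_r_given_c, p_c, cost, stop):
    # Greedy agglomerative segmentation in the spirit of Algorithm 1.
    # p_r_given_c: list of p(R|c) vectors, one per post; p_c: prior
    # weight of each post; cost(p1, w1, p2, w2) -> dissimilarity d of
    # two adjacent segments; stop(segments) -> True when SC > theta.
    segments = [([i], p_r_given_c[i], p_c[i]) for i in range(len(p_c))]
    while len(segments) > 1 and not stop(segments):
        # only ADJACENT pairs are candidates: sequential continuity
        deltas = []
        for (ids1, p1, w1), (ids2, p2, w2) in zip(segments, segments[1:]):
            deltas.append((w1 + w2) * cost(p1, w1, p2, w2))
        i = int(np.argmin(deltas))
        ids1, p1, w1 = segments[i]
        ids2, p2, w2 = segments[i + 1]
        merged = (ids1 + ids2,
                  (w1 * p1 + w2 * p2) / (w1 + w2),  # p(R|merged segment)
                  w1 + w2)
        segments[i:i + 2] = [merged]
    return [ids for ids, _, _ in segments]
```

With a textual cost such as the JSD-based d of Equation 2 plugged in as `cost`, each iteration merges the cheapest adjacent pair, so the total number of candidate evaluations stays linear per pass.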
The value of \u03b8 is tuned by optimizing performance over a validation dataset, just like the other hyper-parameters.", "cite_spans": [ { "start": 293, "end": 319, "text": "(Vijayasenan et al., 2009)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "IB Inspired Text Segmentation Algorithm", "sec_num": "4.1" }, { "text": "The IB inspired text segmentation algorithm (Algorithm 1) respects the sequential continuity constraint, as it considers merging only adjacent pairs (see steps 2, 3, and 4 of Algorithm 1) while optimizing F, unlike the agglomerative IB clustering (Slonim and Tishby, 2000) . As a result, the proposed IB based approach requires only a limited number of computations; more precisely, the number of computations is linear in the number of text snippets.", "cite_spans": [ { "start": 247, "end": 272, "text": "(Slonim and Tishby, 2000)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "IB Inspired Text Segmentation Algorithm", "sec_num": "4.1" }, { "text": "As mentioned above, we submit that non-textual clues (such as the time between two consecutive posts and people mentions within the posts) are critical for segmenting chat text. To incorporate these two important clues, we augment Algorithm 1, developed in the previous section. 
More precisely, we modify d of Equation 2 to d\u0303 as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Non-Textual Clues", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "d\u0303(s i , s i+1 ) = w 1 \u00d7 d(s i , s i+1 ) + w 2 \u00d7 (c t a i+1 \u2212 c t b i ) + w 3 \u00d7 ||s p i \u2212 s p i+1 ||", "eq_num": "(4)" } ], "section": "Incorporating Non-Textual Clues", "sec_num": "4.2" }, { "text": "Here, c t a i+1 and c t b i are the time-stamps of the first post of s i+1 and the last post of s i respectively, s p i denotes the poster and people-mention information of segment s i , and w 1 , w 2 , w 3 are weights. It is important to note that Algorithm 1 utilizes d of Equation 2 to represent textual dissimilarity between a pair of posts in order to achieve the optimal segment sequence S. Following the same intuition, d\u0303 in Equation 4 measures a weighted distance based not only on textual similarity but also on information from time-stamps, posters, and people mentions. The intuition behind the second distance term in d\u0303 is that if the time difference between two posts is small, then they are likely to be in the same segment. Additionally, the third distance term in d\u0303 is intended to merge segments that involve a higher number of common posters and people mentions. Following the same intuition, in addition to the changes in d, we modify the stopping criterion as well, while the rest stays the same as in Algorithm 1. 
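The weighted distance of Equation 4 can be sketched as below. This is a minimal sketch under our own assumptions: argument names are ours, and the poster/people-mention information of a segment is represented as a plain count vector:

```python
import numpy as np

def d_tilde(d_text, t_start_next, t_end_prev,
            poster_prev, poster_next, w1, w2, w3):
    # Equation 4 sketch: weighted sum of textual dissimilarity d,
    # the time gap between adjacent segments, and the Euclidean
    # distance between poster/people-mention vectors.
    time_gap = t_start_next - t_end_prev
    poster_dist = float(np.linalg.norm(poster_prev - poster_next))
    return w1 * d_text + w2 * time_gap + w3 * poster_dist
```

In practice the three terms live on different scales (bits, seconds, counts), so some normalization before weighting would likely be needed; the weights w1, w2, w3 are tuned on validation data, as described later in the paper.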
The stopping criterion is defined as SC > \u03b8, where SC is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Non-Textual Clues", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "SC = w 1 \u00d7 I(R, S)/I(R, C) + w 2 \u00d7 (1 \u2212 G(S)/G max ) + w 3 \u00d7 H(S)/H max", "eq_num": "(5)" } ], "section": "Incorporating Non-Textual Clues", "sec_num": "4.2" }, { "text": "Here, the G(S) and H(S) mentioned in Equation 5 are computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Non-Textual Clues", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "G(S) = \u2211 s i \u2208 S (c t b i \u2212 c t a i ) (6), H(S) = \u2211 i=1..|S| ||s p i \u2212 s p i+1 ||", "eq_num": "(7)" } ], "section": "Incorporating Non-Textual Clues", "sec_num": "4.2" }, { "text": "The first term of SC in Equation 5 is taken from the stopping criterion of Algorithm 1, and the second and third terms are similarly derived. Both the second and third terms decrease as the cardinality of S decreases, and they reflect behaviour analogous to the two introduced clues. The first term computes the fraction of information contained in S about R, normalized by the information contained in C about R; similarly, the second term computes the fraction of time duration between segments normalized by the total duration of the chat text sequence (i.e., 1 minus the fraction of the durations of all segments normalized by the total duration), and the third term computes the sum of inter-segment distances in terms of poster information, normalized by the maximum of the same quantity (i.e., when each post is its own segment).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Non-Textual Clues", "sec_num": "4.2" }, { "text": "This section starts with a description of the datasets collected from real-world conversation platforms in Subsection 5.1. Later, in Subsection 5.2, we explain the evaluation metric utilized in our experiments. Subsection 5.3 describes the baselines developed for a fair comparison with the proposed IB approach. Next, in Subsections 5.4 and 5.5, we discuss the performance accomplished by the proposed approach on both of the collected datasets. Lastly, we analyse the stability of the proposed IB approach with respect to the parameters \u03b2 and \u03b8 in Subsection 5.6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We have collected chat text datasets, namely Slack and Fresco, respectively from http://slackarchive.io/ and http://talk.fresco.me/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Description", "sec_num": "5.1" }, { "text": "After that, we manually annotated them for the text segmentation task. We have utilized annotations done by 3 workers, with problematic cases resolved by consensus. Dataset statistics are given in Table 1 . The collected raw data was in the form of threads, which were later divided into segments. Further, we created multiple documents where each document contains N continuous segments from the original threads. N was selected randomly between 5 and 15. 
60% of these documents were used for tuning hyper-parameters, which include the weights (w 1 , w 2 , w 3 ), \u03b8, and \u03b2; the remaining documents were used for testing.", "cite_spans": [], "ref_spans": [ { "start": 210, "end": 217, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Dataset Description", "sec_num": "5.1" }, { "text": "A small portion of one of the documents from the Slack dataset is depicted in Figure 1(a) . Here, manual annotations are marked by a bold black horizontal line, and also enumerated as 1), 2), and 3). Every text line is a post made by one of the users on the Slack platform during conversations. As mentioned above, in a chat scenario, every post has the following three integral components: 1) poster (indicated by the corresponding identity in Figure 1 , from the beginning till '-=[*says'), 2) time-stamp (between '-=[*' and '*]=-)', and 3) textual content (after '*]=-::: 'till end). One must also notice that some of the posts have people mentions within them (indicated as '<@USERID>' in Figure 1) .", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 89, "text": "Figure 1(a)", "ref_id": "FIGREF1" }, { "start": 437, "end": 445, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 691, "end": 700, "text": "Figure 1)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Dataset Description", "sec_num": "5.1" }, { "text": "To validate the differences between the collected chat datasets and traditional datasets such as Choi's dataset (Choi, 2000) , we computed the fraction of words occurring with a frequency less than a given word frequency, as shown in Figure 2 . It is clearly evident from Figure 2 that the chat segmentation datasets have a significantly higher proportion of infrequent words in comparison to the traditional text segmentation datasets. 
The presence of a large number of infrequent words makes it hard for textual similarity methods to succeed, as it increases the proportion of out-of-vocabulary words (Gulcehre et al., 2016) . Therefore, it becomes even more critical to utilize the non-textual clues while processing chat text.", "cite_spans": [ { "start": 595, "end": 618, "text": "(Gulcehre et al., 2016)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 112, "end": 124, "text": "(Choi, 2000)", "ref_id": "FIGREF2" }, { "start": 234, "end": 242, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 276, "end": 284, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Dataset Description", "sec_num": "5.1" }, { "text": "For performance evaluation, we have employed the P k metric (Beeferman et al., 1999) , which is widely utilized for evaluating the text segmentation task. A sliding window of fixed size k (usually half of the average length of all the segments in the document) slides over the entire document from top to bottom. Both inter- and intra-segment errors for all posts k apart are calculated by comparing inferred and annotated boundaries.", "cite_spans": [ { "start": 56, "end": 80, "text": "(Beeferman et al., 1999)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Setup", "sec_num": "5.2" }, { "text": "We model the set of relevance variables R as word clusters estimated by utilizing agglomerative IB based document clustering (Slonim and Tishby, 2000) , where posts are treated as the relevance variables. Consequently, R comprises word clusters that are informative about posts. Thus, each entry p(r i , c j ) in the matrix p(R, C) represents the joint probability of getting word cluster r i in post c j . 
We calculate p(r_i, c_j) simply by counting the words common to r_i and c_j and then normalizing.", "cite_spans": [ { "start": 125, "end": 150, "text": "(Slonim and Tishby, 2000)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Setup", "sec_num": "5.2" }, { "text": "For comparisons, we have developed multiple baselines. In Random, 5 to 15 boundaries are inserted randomly. In the case of No Boundary, the entire document is labelled as one segment. Next, we implemented C-99 and Dynamic Programming, which are classical benchmarks for the text segmentation task. [Table 2 (excerpt): variant; weight configuration; P_k on Slack, P_k on Fresco. Text: w_1 = 1, w_2 = 0, w_3 = 0; 33, 42. TimeDiff: w_1 = 0, w_2 = 1, w_3 = 0; 26.75, 34.25. Poster: w_1 = 0, w_2 = 0, w_3 = 1; 34.52, 41.50. Text+TimeDiff: w_1, w_2 \u2208 (0, 1), w_3 = 0, w_1 + w_2 = 1; 26.47, 34.68. Text+Poster: w_1, w_3 \u2208 (0, 1), w_2 = 0, w_1 + w_3 = 1; 28.57, 38.21. Text+TimeDiff+Poster: w_1, w_2, w_3 \u2208 (0, 1), w_1 + w_2 + w_3 = 1; 25.47, 34.80.] Another very simple and yet effective baseline, Average Time, is prepared, in which boundaries are inserted after a fixed amount of time has elapsed. The fixed time is calculated from a separate portion of our annotated dataset. The next baseline utilized in our experiments is Encoder-Decoder Distance. In this approach, we have trained a sequence-to-sequence RNN encoder-decoder (Sutskever et al., 2014) utilizing 1.5 million posts from the publicly available Slack dataset, excluding the labelled portion. The network comprises 2 hidden layers, and the hidden state dimension was set to 256 for each. The encoded representations were greedily merged in an agglomerative fashion using Euclidean distance. The stopping criterion for this approach was similar to the third term in Equation 5 corresponding to poster information. The optimization of the hidden state dimension was computationally demanding and hence is left for future exploration. 
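The greedy agglomerative merging used by the Encoder-Decoder Distance baseline can be sketched as follows. This is an assumption-laden illustration, not the authors' code: the learned 256-dimensional encoder representations are replaced by arbitrary input vectors, and the stopping rule is reduced to a simple distance threshold:

```python
import math

def agglomerative_merge(post_vecs, stop_threshold):
    """Greedily merge the most similar *adjacent* segments (Euclidean
    distance between segment mean vectors) until the closest adjacent
    pair is farther apart than stop_threshold. Returns per-post
    segment ids."""
    def mean(vectors):
        return [sum(x) / len(vectors) for x in zip(*vectors)]

    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    segments = [[i] for i in range(len(post_vecs))]
    means = [list(map(float, v)) for v in post_vecs]
    while len(segments) > 1:
        dists = [dist(means[i], means[i + 1]) for i in range(len(means) - 1)]
        best = min(range(len(dists)), key=dists.__getitem__)
        if dists[best] > stop_threshold:
            break
        segments[best] += segments.pop(best + 1)
        means.pop(best + 1)
        means[best] = mean([post_vecs[i] for i in segments[best]])
    labels = [0] * len(post_vecs)
    for sid, seg in enumerate(segments):
        for i in seg:
            labels[i] = sid
    return labels
```

Restricting merges to adjacent segments is what distinguishes segmentation from free clustering: it enforces the sequential continuity of a conversation.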
Similar to Encoder-Decoder Distance, we have developed LDA Distance, where the representations come from a topic model (Blei et al., 2003) with 100 topics.", "cite_spans": [ { "start": 1051, "end": 1075, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF36" }, { "start": 1749, "end": 1768, "text": "(Blei et al., 2003)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Approaches", "sec_num": "5.3" }, { "text": "The results for all prepared baselines and variants of IB on both the Slack and Fresco datasets are reported in Table 2 . For both datasets, multiple variants of IB yield superior performance when compared against all the developed baselines. More precisely, for the Slack dataset, 4 different variants of the proposed IB based method achieve higher performance, with an absolute improvement of as high as 3.23% and a relative improvement of 11.25%, when compared against the baselines. In the case of the Fresco dataset, 3 different variants of the proposed method achieve superior performance, but not as significantly in terms of absolute P_k value as they do for the Slack dataset. We hypothesize that this behaviour potentially arises because of the smaller number of posts per segment for Fresco (5000/800 = 6.25) in comparison to Slack (9000/900 = 10). Also, note that the time clue alone in the IB framework performs best on the Fresco dataset, indicating that the relative importance of the time clue is higher for a dataset with shorter segments (i.e., a low number of posts per segment). 
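The posts-per-segment statistic invoked in this hypothesis can be computed directly from a labelled segmentation. The sketch below (illustrative, not the authors' code) produces the normalized distribution of segment lengths from a sequence of per-post segment ids:

```python
from collections import Counter

def segment_length_distribution(labels):
    """Normalized frequency distribution of segment lengths
    (number of posts per segment) for one segmented document."""
    if not labels:
        return {}
    lengths = Counter()
    run = 1
    for prev, cur in zip(labels, labels[1:]):
        if cur == prev:
            run += 1
        else:
            lengths[run] += 1  # a segment boundary closes the run
            run = 1
    lengths[run] += 1  # close the final segment
    total = sum(lengths.values())
    return {length: n / total for length, n in lengths.items()}
```

Aggregating these distributions over all documents of a dataset gives the curves compared across Slack and Fresco.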
To validate our hypothesis further, we estimate the normalized frequency distribution of segment length (number of posts per segment) for both datasets, as shown in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 109, "end": 116, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1256, "end": 1264, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Quantitative Results", "sec_num": "5.4" }, { "text": "It is worth noting that the obtained empirical results support the major hypothesis of this work, as variants of IB yield superior performance on both datasets. Also, on incorporation of the individual non-textual clues, improvements of 3.23% and 7.32% are observed from Text to Text+TimeDiff for Slack and Fresco, respectively; similarly, from Text to Text+Poster, improvements of 4.43% and 3.79% are observed for Slack and Fresco, respectively. Further, the best performance for both datasets is achieved on fusing both non-textual clues, indicating that the clues are complementary as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative Results", "sec_num": "5.4" }, { "text": "Results obtained for multiple approaches, namely Average Time, IB:TimeDiff, and IB:Text+TimeDiff+Poster, corresponding to the small portion of chat text placed in part (a) of Figure 1 , are presented in part (b) of Figure 1 . The Average Time baseline (indicated in purple) managed to find three boundaries, albeit one of the boundaries is significantly off, potentially due to the constraint of a fixed time duration.", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 182, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 212, "end": 220, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Qualitative Results", "sec_num": "5.5" }, { "text": "Similarly, the IB:TimeDiff approach also manages to find the first two boundaries correctly but fails to recover the third boundary. 
Results seem to indicate that the time clue is not very effective at reconstructing segmentation boundaries when segment length varies considerably within the document. Interestingly, the combination of all three clues, as in the IB:Text+TimeDiff+Poster approach, yielded the best results, as all three ground-truth segmentation boundaries are recovered with high precision. Therefore, we submit that the incorporation of non-textual clues is critical for achieving superior results when segmenting chat text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Results", "sec_num": "5.5" }, { "text": "To analyse the behaviour of the proposed IB based methods, we compute the average performance metric P_k of IB:Text with respect to \u03b2 and \u03b8 over the test set of the Slack dataset. Also, to facilitate the reproduction of results, we report the optimal values of all the parameters for all the variants of the proposed IB approach in Table 3 . Figure 4 shows the behaviour of the average P_k over the test set of the Slack dataset with respect to the hyper-parameter \u03b2. As mentioned above, the parameter \u03b2 represents a trade-off between the preserved amount of information and the level of compression. It is clearly observable that the optimal value of \u03b2 does not lie at either extreme, indicating the importance of both terms (as in Equation 1) of the proposed IB method. The coefficient of the second term (i.e., 1/\u03b2 = 10^-3) is smaller. One could interpret the second term as a regularization term, because 1/\u03b2 controls the complexity of the learnt segment sequence S. 
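For reference, the trade-off discussed here can be written in the standard information bottleneck form. Since Equation 1 itself is not reproduced in this excerpt, the following is a reconstruction based on the surrounding description (maximize information about the relevance variables R while compressing the posts C into the segment sequence S):

```latex
\max_{S} \; \mathcal{F} = I(S; R) \, - \, \frac{1}{\beta}\, I(S; C)
```

With 1/\u03b2 on the order of 10^-3, the second (compression) term acts as a mild regularizer on the complexity of S, which matches the interpretation given above.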
Furthermore, the optimal values in Table 3 for variants with a fusion of two or more clues indicate the complementary and relative importance of the studied non-textual clues.", "cite_spans": [], "ref_spans": [ { "start": 326, "end": 333, "text": "Table 5", "ref_id": null }, { "start": 338, "end": 346, "text": "Figure 4", "ref_id": "FIGREF4" }, { "start": 1049, "end": 1056, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Effect Of Parameters", "sec_num": "5.6" }, { "text": "The average performance evaluation metric P_k over the test set of the Slack dataset with respect to the hyper-parameter \u03b8 is depicted in Figure 5 . Figure 5 makes the appropriateness of the stopping criterion clearly evident. Initially, the average P_k value decreases as more coherent posts are merged, and it continues to decrease up to a particular value of \u03b8. After that, the average P_k value starts increasing, potentially due to the merging of more dissimilar segments. The optimal value of \u03b8 varies significantly from one variant to another, requiring mandatory tuning over the validation dataset, as reported in Table 3 , for all IB variants proposed in this work.", "cite_spans": [], "ref_spans": [ { "start": 130, "end": 138, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 141, "end": 150, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 634, "end": 641, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Effect Of Parameters", "sec_num": "5.6" }, { "text": "We started by highlighting the increasing importance of efficient methods for processing chat text, in particular for text segmentation, and we have collected and introduced datasets for this task. The introduction of chat text datasets has enabled us to explore segmentation approaches that are specific to chat text. Further, our results demonstrate that the proposed IB method yields an absolute improvement of as high as 3.23%. 
Also, a significant boost (3.79%-7.32%) in performance is observed on incorporation of the non-textual clues, indicating their criticality. In the future, it will be interesting to investigate the possibility of incorporating semantic word embeddings in the proposed IB method (Alemi and Ginsparg, 2015).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion And Future Work", "sec_num": "6" } ], "back_matter": [ { "text": "IB Variants: Table 3 : Optimal values of parameters corresponding to results obtained by IB variants in Table 2.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 20, "text": "Table 3", "ref_id": null }, { "start": 104, "end": 112, "text": "Table 2.", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Text segmentation based on semantic word embeddings", "authors": [ { "first": "A", "middle": [], "last": "Alexander", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Alemi", "suffix": "" }, { "first": "", "middle": [], "last": "Ginsparg", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1503.05543" ] }, "num": null, "urls": [], "raw_text": "Alexander A Alemi and Paul Ginsparg. 2015. Text segmentation based on semantic word embeddings. arXiv preprint arXiv:1503.05543.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Topic detection and tracking: event-based information organization", "authors": [ { "first": "James", "middle": [], "last": "Allan", "suffix": "" } ], "year": 2012, "venue": "", "volume": "12", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Allan. 2012. Topic detection and tracking: event-based information organization, volume 12. 
Springer Science & Business Media.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Image segmentation using information bottleneck method", "authors": [ { "first": "Anton", "middle": [], "last": "Bardera", "suffix": "" }, { "first": "Jaume", "middle": [], "last": "Rigau", "suffix": "" }, { "first": "Imma", "middle": [], "last": "Boada", "suffix": "" }, { "first": "Miquel", "middle": [], "last": "Feixas", "suffix": "" }, { "first": "Mateu", "middle": [], "last": "Sbert", "suffix": "" } ], "year": 2009, "venue": "IEEE Transactions on Image Processing", "volume": "18", "issue": "7", "pages": "1601--1612", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anton Bardera, Jaume Rigau, Imma Boada, Miquel Feixas, and Mateu Sbert. 2009. Image segmentation using information bottleneck method. IEEE Trans- actions on Image Processing, 18(7):1601-1612.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Catching the drift: Probabilistic content models, with applications to generation and summarization", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. arXiv preprint arXiv: 0405039.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Statistical models for text segmentation. Machine Learning", "authors": [ { "first": "Doug", "middle": [], "last": "Beeferman", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Berger", "suffix": "" }, { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 1999, "venue": "", "volume": "34", "issue": "", "pages": "177--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doug Beeferman, Adam Berger, and John Lafferty. 1999. 
Statistical models for text segmentation. Ma- chine Learning, 34(1-3):177-210.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Latent dirichlet allocation", "authors": [ { "first": "M", "middle": [], "last": "David", "suffix": "" }, { "first": "", "middle": [], "last": "Blei", "suffix": "" }, { "first": "Y", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Michael I Jordan", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2003, "venue": "Journal of machine Learning research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of ma- chine Learning research, 3(Jan):993-1022.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Topic-based document segmentation with probabilistic latent semantic analysis", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Francine", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Tsochantaridis", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Eleventh International Conference on Information and Knowledge Management, CIKM '02", "volume": "", "issue": "", "pages": "211--218", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Brants, Francine Chen, and Ioannis Tsochan- taridis. 2002. Topic-based document segmentation with probabilistic latent semantic analysis. 
In Pro- ceedings of the Eleventh International Conference on Information and Knowledge Management, CIKM '02, pages 211-218.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Advances in domain independent linear text segmentation", "authors": [ { "first": "Y", "middle": [ "Y" ], "last": "Freddy", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 1st North American Chapter of the Association for Computational Linguistics Conference", "volume": "", "issue": "", "pages": "26--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Freddy Y. Y. Choi. 2000. Advances in domain indepen- dent linear text segmentation. In Proceedings of the 1st North American Chapter of the Association for Computational Linguistics Conference, pages 26- 33.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Topic segmentation algorithms for text summarization and passage retrieval: An exhaustive evaluation", "authors": [ { "first": "Ga\u00ebl", "middle": [], "last": "Dias", "suffix": "" }, { "first": "Elsa", "middle": [], "last": "Alves", "suffix": "" }, { "first": "Jos\u00e9 Gabriel Pereira", "middle": [], "last": "Lopes", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 22Nd National Conference on Artificial Intelligence", "volume": "2", "issue": "", "pages": "1334--1339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ga\u00ebl Dias, Elsa Alves, and Jos\u00e9 Gabriel Pereira Lopes. 2007. Topic segmentation algorithms for text sum- marization and passage retrieval: An exhaustive evaluation. In Proceedings of the 22Nd National Conference on Artificial Intelligence -Volume 2, AAAI'07, pages 1334-1339. 
AAAI Press.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Topic segmentation with a structured topic model", "authors": [ { "first": "Lan", "middle": [], "last": "Du", "suffix": "" }, { "first": "L", "middle": [], "last": "Wray", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Buntine", "suffix": "" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2015, "venue": "HLT-NAACL. The Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lan Du, Wray L. Buntine, and Mark Johnson. 2015a. Topic segmentation with a structured topic model. In HLT-NAACL. The Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Topic segmentation with a structured topic model", "authors": [ { "first": "Lan", "middle": [], "last": "Du", "suffix": "" }, { "first": "K", "middle": [], "last": "John", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Pate", "suffix": "" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics Conference", "volume": "", "issue": "", "pages": "190--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lan Du, John K Pate, and Mark Johnson. 2013. Topic segmentation with a structured topic model. 
In Pro- ceedings of the North American Chapter of the As- sociation for Computational Linguistics Conference, pages 190-200.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Topic segmentation with an ordering-based topic model", "authors": [ { "first": "Lan", "middle": [], "last": "Du", "suffix": "" }, { "first": "K", "middle": [], "last": "John", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Pate", "suffix": "" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "2232--2238", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lan Du, John K Pate, and Mark Johnson. 2015b. Topic segmentation with an ordering-based topic model. In Proceedings of the Twenty-Ninth AAAI Confer- ence on Artificial Intelligence, pages 2232-2238.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "You talking to me? a corpus and algorithm for conversation disentanglement", "authors": [ { "first": "Micha", "middle": [], "last": "Elsner", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "834--842", "other_ids": {}, "num": null, "urls": [], "raw_text": "Micha Elsner and Eugene Charniak. 2008. You talk- ing to me? a corpus and algorithm for conversation disentanglement. pages 834-842.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Disentangling chat", "authors": [ { "first": "Micha", "middle": [], "last": "Elsner", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2010, "venue": "Computation Linguistics", "volume": "36", "issue": "", "pages": "389--409", "other_ids": {}, "num": null, "urls": [], "raw_text": "Micha Elsner and Eugene Charniak. 2010. Disentan- gling chat. 
Computation Linguistics, 36:389-409.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Disentangling chat with local coherence models", "authors": [ { "first": "Micha", "middle": [], "last": "Elsner", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1179--1189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Micha Elsner and Eugene Charniak. 2011. Disentan- gling chat with local coherence models. In Proceed- ings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies -Volume 1, pages 1179-1189.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Unsupervised text segmentation using semantic relatedness graphs", "authors": [ { "first": "Goran", "middle": [], "last": "Glavas", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Nanni", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics (*SEM 2016)", "volume": "", "issue": "", "pages": "125--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goran Glavas, Federico Nanni, and Simone Paolo Ponzetto. 2016. Unsupervised text segmentation us- ing semantic relatedness graphs. 
In Proceedings of the Fifth Joint Conference on Lexical and Computa- tional Semantics (*SEM 2016), pages 125-130.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Applying the information bottleneck principle to unsupervised clustering of discrete and continuous image representations", "authors": [ { "first": "Shiri", "middle": [], "last": "Gordon", "suffix": "" }, { "first": "Hayit", "middle": [], "last": "Greenspan", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Goldberger", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Ninth IEEE International Conference on Computer Vision", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shiri Gordon, Hayit Greenspan, and Jacob Goldberger. 2003. Applying the information bottleneck princi- ple to unsupervised clustering of discrete and con- tinuous image representations. In Proceedings of the Ninth IEEE International Conference on Computer Vision -Volume 2, ICCV '03, pages 370-, Washing- ton, DC, USA. IEEE Computer Society.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Pointing the unknown words", "authors": [ { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Sungjin", "middle": [], "last": "Ahn", "suffix": "" }, { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1603.08148" ] }, "num": null, "urls": [], "raw_text": "Caglar Gulcehre, Sungjin Ahn, Ramesh Nallap- ati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. 
arXiv preprint arXiv:1603.08148.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Texttiling: Segmenting text into multi-paragraph subtopic passages", "authors": [ { "first": "A", "middle": [], "last": "Marti", "suffix": "" }, { "first": "", "middle": [], "last": "Hearst", "suffix": "" } ], "year": 1997, "venue": "Computational linguistics", "volume": "23", "issue": "1", "pages": "33--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marti A Hearst. 1997. Texttiling: Segmenting text into multi-paragraph subtopic passages. Computational linguistics, 23(1):33-64.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Topic segmentation and labeling in asynchronous conversations", "authors": [ { "first": "Shafiq", "middle": [], "last": "Joty", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Carenini", "suffix": "" }, { "first": "Raymond T", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2013, "venue": "Journal of Artificial Intelligence Research", "volume": "47", "issue": "", "pages": "521--573", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shafiq Joty, Giuseppe Carenini, and Raymond T Ng. 2013. Topic segmentation and labeling in asyn- chronous conversations. Journal of Artificial Intelli- gence Research, 47:521-573.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Information-theoretic enhancement learning and its application to visualization of self-organizing maps", "authors": [ { "first": "Ryotaro", "middle": [], "last": "Kamimura", "suffix": "" } ], "year": 2010, "venue": "Neurocomputing", "volume": "73", "issue": "13", "pages": "2642--2664", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryotaro Kamimura. 2010. Information-theoretic en- hancement learning and its application to visual- ization of self-organizing maps. 
Neurocomputing, 73(13):2642-2664.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Linear text segmentation using a dynamic programming algorithm", "authors": [ { "first": "Athanasios", "middle": [], "last": "Kehagias", "suffix": "" }, { "first": "Fragkou", "middle": [], "last": "Pavlina", "suffix": "" }, { "first": "Vassilios", "middle": [], "last": "Petridis", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the tenth conference on European chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "171--178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Athanasios Kehagias, Fragkou Pavlina, and Vassilios Petridis. 2003. Linear text segmentation using a dy- namic programming algorithm. In Proceedings of the tenth conference on European chapter of the As- sociation for Computational Linguistics-Volume 1, pages 171-178. Association for Computational Lin- guistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Text segmentation based on similarity between words", "authors": [ { "first": "Hideki", "middle": [], "last": "Kozima", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 31st Annual Meeting on Association for Computational Linguistics, ACL '93", "volume": "", "issue": "", "pages": "286--288", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hideki Kozima. 1993. Text segmentation based on similarity between words. 
In Proceedings of the 31st Annual Meeting on Association for Computa- tional Linguistics, ACL '93, pages 286-288.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Tiara: Interactive, topic-based visual text summarization and analysis", "authors": [ { "first": "Shixia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Michelle", "middle": [ "X" ], "last": "Zhou", "suffix": "" }, { "first": "Shimei", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Yangqiu", "middle": [], "last": "Song", "suffix": "" }, { "first": "Weihong", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Weijia", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Xiaoxiao", "middle": [], "last": "Lian", "suffix": "" } ], "year": 2012, "venue": "ACM Transactions on Intelligent Systems and Technology", "volume": "3", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shixia Liu, Michelle X Zhou, Shimei Pan, Yangqiu Song, Weihong Qian, Weijia Cai, and Xiaoxiao Lian. 2012. Tiara: Interactive, topic-based visual text summarization and analysis. ACM Transac- tions on Intelligent Systems and Technology (TIST), 3(2):25.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Conversation trees: A grammar model for topic structure in forums", "authors": [ { "first": "Annie", "middle": [], "last": "Louis", "suffix": "" }, { "first": "Shay", "middle": [ "B" ], "last": "Cohen", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Empirical Methods on Natural Language Processing", "volume": "", "issue": "", "pages": "1543--1553", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annie Louis and Shay B. Cohen. 2015. Conversa- tion trees: A grammar model for topic structure in forums. In Proceedings of the Empirical Methods on Natural Language Processing, pages 1543-1553. 
The Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Text segmentation via topic modeling: An analytical study", "authors": [ { "first": "Hemant", "middle": [], "last": "Misra", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Yvon", "suffix": "" }, { "first": "Joemon", "middle": [ "M" ], "last": "Jose", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Cappe", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM '09", "volume": "", "issue": "", "pages": "1553--1556", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hemant Misra, Fran\u00e7ois Yvon, Joemon M. Jose, and Olivier Cappe. 2009. Text segmentation via topic modeling: An analytical study. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM '09, pages 1553- 1556.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A critique and improvement of an evaluation metric for text segmentation", "authors": [ { "first": "Lev", "middle": [], "last": "Pevzner", "suffix": "" }, { "first": "A", "middle": [], "last": "Marti", "suffix": "" }, { "first": "", "middle": [], "last": "Hearst", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics", "volume": "28", "issue": "", "pages": "19--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Pevzner and Marti A Hearst. 2002. A critique and improvement of an evaluation metric for text seg- mentation. 
Computational Linguistics, 28:19-36.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A new graph based text segmentation using wikipedia for automatic text summarization", "authors": [ { "first": "Mohsen", "middle": [], "last": "Pourvali", "suffix": "" }, { "first": "Ph D Mohammad Saniee", "middle": [], "last": "Abadeh", "suffix": "" } ], "year": 2012, "venue": "Editorial Preface", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohsen Pourvali and Ph D Mohammad Saniee Abadeh. 2012. A new graph based text segmenta- tion using wikipedia for automatic text summariza- tion. Editorial Preface, 3(1).", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Automatic assessment of collaborative chat conversations with polycafe", "authors": [ { "first": "Traian", "middle": [], "last": "Rebedea", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Dascalu", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Trausan-Matu", "suffix": "" }, { "first": "Gillian", "middle": [], "last": "Armitt", "suffix": "" }, { "first": "Costin", "middle": [], "last": "Chiru", "suffix": "" } ], "year": 2011, "venue": "European Conference on Technology Enhanced Learning", "volume": "", "issue": "", "pages": "299--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "Traian Rebedea, Mihai Dascalu, Stefan Trausan-Matu, Gillian Armitt, and Costin Chiru. 2011. Automatic assessment of collaborative chat conversations with polycafe. In European Conference on Technology Enhanced Learning, pages 299-312. 
Springer.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Text segmentation with topic models", "authors": [ { "first": "Martin", "middle": [], "last": "Riedl", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2012, "venue": "Journal for Language Technology and Computational Linguistics", "volume": "27", "issue": "", "pages": "47--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Riedl and Chris Biemann. 2012. Text segmen- tation with topic models. In Journal for Language Technology and Computational Linguistics, volume 27.1, pages 47-69.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Getting more from segmentation evaluation", "authors": [ { "first": "Martin", "middle": [], "last": "Scaiano", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "362--366", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Scaiano and Diana Inkpen. 2012. Getting more from segmentation evaluation. In Proceedings of the 2012 conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 362-366. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Detection of topic change in irc chat logs", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "De- tection of topic change in irc chat logs. 
http://www.trevorstone.org/school/ircsegmentation.pdf.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Agglomerative information bottleneck", "authors": [ { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" }, { "first": "Naftali", "middle": [], "last": "Tishby", "suffix": "" } ], "year": 1999, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "617--623", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noam Slonim and Naftali Tishby. 1999. Agglomerative information bottleneck. In Advances in Neural Information Processing Systems, pages 617-623.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Document clustering using word clusters via the information bottleneck method", "authors": [ { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" }, { "first": "Naftali", "middle": [], "last": "Tishby", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "208--215", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noam Slonim and Naftali Tishby. 2000. Document clustering using word clusters via the information bottleneck method. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 208-215.
ACM.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Multi-task text segmentation and alignment based on weighted mutual information", "authors": [ { "first": "Bingjun", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Ding", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Hongyuan", "middle": [], "last": "Zha", "suffix": "" }, { "first": "John", "middle": [], "last": "Yen", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 15th ACM International Conference on Information and Knowledge Management, CIKM '06", "volume": "", "issue": "", "pages": "846--847", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bingjun Sun, Ding Zhou, Hongyuan Zha, and John Yen. 2006. Multi-task text segmentation and alignment based on weighted mutual information. In Proceedings of the 15th ACM International Conference on Information and Knowledge Management, CIKM '06, pages 846-847.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks.
In Advances in neural information processing systems, pages 3104-3112.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "The information bottleneck method", "authors": [ { "first": "Naftali", "middle": [], "last": "Tishby", "suffix": "" }, { "first": "Fernando", "middle": [ "C", "N" ], "last": "Pereira", "suffix": "" }, { "first": "William", "middle": [], "last": "Bialek", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Naftali Tishby, Fernando C. N. Pereira, and William Bialek. 2000. The information bottleneck method. arXiv preprint physics/0004057.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A statistical model for domain-independent text segmentation", "authors": [ { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Hitoshi", "middle": [], "last": "Isahara", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, ACL '01", "volume": "", "issue": "", "pages": "499--506", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masao Utiyama and Hitoshi Isahara. 2001. A statistical model for domain-independent text segmentation.
In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, ACL '01, pages 499-506.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "An information theoretic approach to speaker diarization of meeting data", "authors": [ { "first": "Deepu", "middle": [], "last": "Vijayasenan", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Valente", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "Bourlard", "suffix": "" } ], "year": 2009, "venue": "IEEE Transactions on Audio, Speech, and Language Processing", "volume": "17", "issue": "7", "pages": "1382--1393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deepu Vijayasenan, Fabio Valente, and Herv\u00e9 Bourlard. 2009. An information theoretic approach to speaker diarization of meeting data. IEEE Transactions on Audio, Speech, and Language Processing, 17(7):1382-1393.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Recovering implicit thread structure in newsgroup style conversations", "authors": [ { "first": "Yi-Chia", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Mahesh", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" }, { "first": "Carolyn", "middle": [ "Penstein" ], "last": "Ros\u00e9", "suffix": "" } ], "year": 2008, "venue": "Proceedings of The International Conference on Weblogs and Social Media", "volume": "", "issue": "", "pages": "152--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi-Chia Wang, Mahesh Joshi, William W. Cohen, and Carolyn Penstein Ros\u00e9. 2008. Recovering implicit thread structure in newsgroup style conversations. In Proceedings of The International Conference on Weblogs and Social Media, pages 152-160.
The AAAI Press.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Topic detection and tracking for threaded discussion communities", "authors": [ { "first": "Mingliang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Weiming", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Ou", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2008, "venue": "Web Intelligence and Intelligent Agent Technology", "volume": "1", "issue": "", "pages": "77--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mingliang Zhu, Weiming Hu, and Ou Wu. 2008. Topic detection and tracking for threaded discussion communities. In Web Intelligence and Intelligent Agent Technology, 2008. WI-IAT'08. IEEE/WIC/ACM International Conference on, volume 1, pages 77-83. IEEE.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "t^a_{i+1}, t^b_i and s^p_i represent the time-stamp of the first post of segment s_{i+1}, the time-stamp of the last post of segment s_i, and the representation of poster information embedded in segment s_i, respectively. The s^p_i representation is computed as a bag of posters, counting all the people mentioned in the posts as well as the posters themselves in a segment. w_1, w_2, w_3 are weights indicating the relative importance of the distance terms computed for the three different clues. ||.|| in Equation 4 indicates the Euclidean norm.", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "(a) Manually created ground truth for Slack public conversations. Black lines represent segmentation boundaries. (b) Results obtained for multiple approaches.
Text best read magnified.", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "Fraction of words less than a given word frequency.", "num": null }, "FIGREF3": { "uris": null, "type_str": "figure", "text": "Normalized frequency distribution of segment length for both the collected chat datasets.", "num": null }, "FIGREF4": { "uris": null, "type_str": "figure", "text": "Average evaluation metric P_k over the Slack dataset with respect to hyper-parameter \u03b2.", "num": null }, "FIGREF5": { "uris": null, "type_str": "figure", "text": "Average evaluation metric P_k over the Slack dataset with respect to hyper-parameter \u03b8.", "num": null }, "TABREF2": { "text": "", "type_str": "table", "num": null, "content": "", "html": null } } } }