{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:12:35.824975Z" }, "title": "NMF Ensembles? Not for Text Summarization!", "authors": [ { "first": "Alka", "middle": [], "last": "Khurana", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Delhi Delhi", "location": { "country": "India" } }, "email": "akhurana@cs.du.ac.in" }, { "first": "Vasudha", "middle": [], "last": "Bhatnagar", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Delhi Delhi", "location": { "country": "India" } }, "email": "vbhatnagar@cs.du.ac.in" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Non-negative Matrix Factorization (NMF) has been used for text analytics with promising results. Instability of results arising due to stochastic variations during initialization makes a case for use of ensemble technology. However, our extensive empirical investigation indicates otherwise. In this paper, we establish that ensemble summary for single document using NMF is no better than the best base model summary.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Non-negative Matrix Factorization (NMF) has been used for text analytics with promising results. Instability of results arising due to stochastic variations during initialization makes a case for use of ensemble technology. However, our extensive empirical investigation indicates otherwise. In this paper, we establish that ensemble summary for single document using NMF is no better than the best base model summary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Non-negative Matrix factorization (NMF) has demonstrated promise in text analytic tasks like topic modeling (Suh et al., 2017; Qiang et al., 2018; Belford et al., 2018) , document summarization (Lee et al., 2009; Khurana and Bhatnagar, 2019) and document clustering (Shahnaz et al., 2006; Shinnou and Sasaki, 2007) . The method finds favour due to the presence of non-negative elements in resultant factor matrices, which enhance intuitive understanding of the underlying latent semantic structure of the text (Lee and Seung, 1999) .", "cite_spans": [ { "start": 108, "end": 126, "text": "(Suh et al., 2017;", "ref_id": "BIBREF14" }, { "start": 127, "end": 146, "text": "Qiang et al., 2018;", "ref_id": "BIBREF11" }, { "start": 147, "end": 168, "text": "Belford et al., 2018)", "ref_id": "BIBREF1" }, { "start": 194, "end": 212, "text": "(Lee et al., 2009;", "ref_id": "BIBREF8" }, { "start": 213, "end": 241, "text": "Khurana and Bhatnagar, 2019)", "ref_id": "BIBREF5" }, { "start": 266, "end": 288, "text": "(Shahnaz et al., 2006;", "ref_id": "BIBREF12" }, { "start": 289, "end": 314, "text": "Shinnou and Sasaki, 2007)", "ref_id": "BIBREF13" }, { "start": 510, "end": 531, "text": "(Lee and Seung, 1999)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent applications of ensemble methods for NMF based topic modeling has shown considerable promise (Suh et al., 2017; Qiang et al., 2018; Belford et al., 2018) . 
These observations drive our motivation for exploring NMF ensembles for the document summarization task.", "cite_spans": [ { "start": 100, "end": 118, "text": "(Suh et al., 2017;", "ref_id": "BIBREF14" }, { "start": 119, "end": 138, "text": "Qiang et al., 2018;", "ref_id": "BIBREF11" }, { "start": 139, "end": 160, "text": "Belford et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Consider a (pre-processed) document D consisting of n sentences (S_1, S_2, ..., S_n) and m terms (t_1, t_2, ..., t_m), represented by a Boolean term-sentence matrix A of size m × n. NMF decomposition of A results in two non-negative factor matrices W and H, where W is the m × r term-topic (feature) matrix and H is the r × n topic-sentence (coefficient) matrix, with r ≪ min{m, n}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NMF for Text Summarization", "sec_num": "1.1" }, { "text": "Columns in W correspond to document topics, represented as τ_1, τ_2, ..., τ_r in the latent semantic space, and columns in H represent sentences in D. Element w_ij in W signifies the contribution of term t_i to topic τ_j, and element h_ij in H denotes the strength of topic τ_i in sentence S_j. Deft manipulation of the elements of the two factor matrices yields distinctive sentence scores (Lee et al., 2009; Khurana and Bhatnagar, 2019). Top-scoring sentences are selected to generate a summary of the desired length.", "cite_spans": [ { "start": 389, "end": 407, "text": "(Lee et al., 2009;", "ref_id": "BIBREF8" }, { "start": 408, "end": 436, "text": "Khurana and Bhatnagar, 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "NMF for Text Summarization", "sec_num": "1.1" }, { "text": "Even though NMF-based automatic text summarization is unsupervised and carries the advantages of language, domain and collection independence, it has been used only sporadically for summarization. The reason can be linked to stochastic variations in the factor matrices due to random initialization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Instability of NMF for Text Summarization:", "sec_num": null }, { "text": "Repeated NMF factorization of the input term-sentence matrix results in different sentence scores, generating different summaries. This ambivalence renders the resulting NMF summaries dubious. In the authors' opinion, this has slowed development in this line of research. Lee et al. (2009) suggested a simplistic fix to this problem by using static initialization for W and H. An experiment on the DUC2002 1 data-set with varying initial seed values for W and H shows that the best initialization value is document specific (Fig. 1). Hence, fixed initialization of the factor matrices is not a prudent idea.", "cite_spans": [ { "start": 273, "end": 290, "text": "Lee et al. (2009)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 521, "end": 529, "text": "(Fig. 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Instability of NMF for Text Summarization:", "sec_num": null }, { "text": "Another fix for the problem is to use NNDSVD (Non-negative Double Singular Value Decomposition (Boutsidis and Gallopoulos, 2008)) based initialization for the NMF factor matrices.
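For concreteness, the following is a minimal sketch (ours, not the authors' released code) of the instability and of the NNDSVD fix, using scikit-learn's NMF on a hypothetical toy term-sentence matrix; scoring sentences by the column sums of H is a deliberate simplification, as Lee et al. (2009) and NMF-TR use more elaborate scoring schemes.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
A = (rng.random((40, 12)) < 0.3).astype(float)  # toy Boolean matrix: 40 terms x 12 sentences
r, k = 4, 3                                     # number of latent topics, summary length

def summary(init, seed=None):
    nmf = NMF(n_components=r, init=init, random_state=seed, max_iter=500)
    W = nmf.fit_transform(A)       # m x r term-topic matrix
    H = nmf.components_            # r x n topic-sentence matrix
    scores = H.sum(axis=0)         # simplified per-sentence score
    return tuple(np.argsort(scores)[::-1][:k])  # indices of the top-k sentences

print(summary("random", seed=1))   # with random init, summaries vary with the seed
print(summary("random", seed=2))
print(summary("nndsvd"))           # NNDSVD init is deterministic, no seed needed
```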
Our earlier work (Khurana and Bhatnagar, 2019) establishes that this initialization method improves the summary quality over fixed initialization for several benchmark data-sets.", "cite_spans": [ { "start": 95, "end": 128, "text": "(Boutsidis and Gallopoulos, 2008)", "ref_id": "BIBREF2" }, { "start": 194, "end": 223, "text": "(Khurana and Bhatnagar, 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Instability of NMF for Text Summarization:", "sec_num": null }, { "text": "Random initialization has been exploited by clustering and topic modeling researchers to create ensembles (Greene et al., 2008; Belford et al., 2018; Qiang et al., 2018). Ensembling is a machine learning technique that combines multiple, varying base models into a consensus model, which is expected to perform better than the individual base models.", "cite_spans": [ { "start": 106, "end": 127, "text": "(Greene et al., 2008;", "ref_id": "BIBREF4" }, { "start": 128, "end": 149, "text": "Belford et al., 2018;", "ref_id": "BIBREF1" }, { "start": 150, "end": 169, "text": "Qiang et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "NMF Ensembles", "sec_num": "1.2" }, { "text": "Effectiveness of NMF ensembles in text analytics, specifically in topic modeling (Belford et al., 2018; Qiang et al., 2018), motivated the current research. We initiated the study with the aim of leveraging stochastic variations in NMF factors, and the resulting diverse base summaries, to produce a stable ensemble summary.", "cite_spans": [ { "start": 81, "end": 103, "text": "(Belford et al., 2018;", "ref_id": "BIBREF1" }, { "start": 104, "end": 123, "text": "Qiang et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "NMF Ensembles", "sec_num": "1.2" }, { "text": "Extrapolating from earlier studies, we expected that the NMF ensemble summary, smoothed over multifarious summaries obtained from randomly initialized NMF factors, would achieve higher ROUGE scores. However, our investigations establish that NMF ensembles are not effective. Rather, despite the heavy overhead of creating multiple base models and combining them, NMF ensembles often perform worse than the best base model for single-document extractive summarization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NMF Ensembles", "sec_num": "1.2" }, { "text": "Ensemble methods are employed in supervised, semi-supervised and unsupervised learning settings. In all scenarios, they comprise two phases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NMF Ensembles for Text Summarization", "sec_num": "2" }, { "text": "In the first phase, diverse base models are generated. Diversity in the models is recognized to be the key factor for improvement (Kuncheva and Hadjitodorov, 2004), and is commonly sourced from variations in the base algorithm, the algorithmic parameters or the data itself.", "cite_spans": [ { "start": 126, "end": 159, "text": "(Kuncheva and Hadjitodorov, 2004)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "NMF Ensembles for Text Summarization", "sec_num": "2" }, { "text": "In the second phase, the multiple base models are combined using a consensus (aka integration) function.
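To make the two phases concrete, here is a minimal sketch (an illustration under the same simplified sentence scoring as above, not the authors' code); the consensus functions correspond to the first three combiners examined later in Sec. 2.2.

```python
import numpy as np
from sklearn.decomposition import NMF

def base_model_scores(A, r, size):
    """Phase 1: one randomly initialized NMF factorization per base model."""
    rows = []
    for seed in range(size):
        nmf = NMF(n_components=r, init="random", random_state=seed, max_iter=500)
        nmf.fit(A)
        rows.append(nmf.components_.sum(axis=0))  # simplified per-sentence scores
    return np.vstack(rows)                        # size x n matrix of base scores

def consensus_summary(S, k, method="average"):
    """Phase 2: integrate the base scores and select the top-k sentences."""
    combine = {"average":  lambda s: s.mean(axis=0),
               "median":   lambda s: np.median(s, axis=0),
               "quartile": lambda s: np.percentile(s, 75, axis=0)}[method]
    return np.argsort(combine(S))[::-1][:k]
```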
The wide variety of choices for creating diversity and for combining base models gives rise to numerous possible ensembles (Zhou, 2012).", "cite_spans": [ { "start": 230, "end": 242, "text": "(Zhou, 2012)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "NMF Ensembles for Text Summarization", "sec_num": "2" }, { "text": "Repeated application of NMF on the term-sentence matrix leads to the generation of multiple base models. In the present context, we achieve diversity using two methods: (i) repeated factorization 2 of A using NMF with random initial seed values for W and H, and (ii) repeated factorization by varying the number of latent topics into which the document is decomposed, as suggested in (Greene et al., 2008). The second strategy implicitly embeds the variations that arise out of random initialization.", "cite_spans": [ { "start": 380, "end": 401, "text": "(Greene et al., 2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Generation of diverse base models", "sec_num": "2.1" }, { "text": "In practice, variation in the choice of NMF solvers and initialization methods is also a source of diversity. We, however, refrain from following this direction because it stands on weak scientific ground.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generation of diverse base models", "sec_num": "2.1" }, { "text": "We examine six combining methods, in increasing order of complexity, to generate consensus summaries. The first three are simple aggregation methods, where sentence scores are combined directly. The next two methods are based on rank manipulation of the scored sentences. Finally, we use Stacking, which is a sophisticated combining method (Zhou, 2012; Belford et al., 2018). i. Average: We calculate the consensus score of each sentence in the document by averaging its scores over all base models, and use it for summary sentence selection. ii. Median: Since the average is sensitive to outliers, we calculate the median score of each sentence across all base models and use it for summary sentence selection. iii. Quartile: We obtain the consensus score as the third quartile of the sentence scores across all base models, and use it for summary sentence selection. iv. Voting: We rank sentences based on their scores. The consensus rank of a sentence is its most frequent (majority) rank amongst the base models. Top-ranking sentences are selected for the summary. v. Ranking: We count the number of times a sentence appears among the top-k scoring sentences of a base model (k is the desired number of sentences), and rank sentences by this frequency. Top-scoring sentences are included in the summary. vi. Stacking: Stacking is a well-established combining method, which combines base-level models to create a meta-training set (Zhou, 2012). Subsequently, the ensemble model is trained on this meta-training set. We stack the topic-term (W^T) matrices (base-level models) into a meta-training set for producing the stacked ensemble (Belford et al., 2018). This stacked matrix is factorized using NMF with NNDSVD initialization to obtain an ensembled topic-term matrix, which, along with A, is used for scoring sentences with the term-oriented sentence selection method NMF-TR, proposed in (Khurana and Bhatnagar, 2019).
Finally, the top-k scoring sentences are included in the summary.", "cite_spans": [ { "start": 334, "end": 346, "text": "(Zhou, 2012;", "ref_id": "BIBREF15" }, { "start": 347, "end": 368, "text": "Belford et al., 2018)", "ref_id": "BIBREF1" }, { "start": 1472, "end": 1484, "text": "(Zhou, 2012)", "ref_id": "BIBREF15" }, { "start": 1671, "end": 1693, "text": "(Belford et al., 2018)", "ref_id": "BIBREF1" }, { "start": 1910, "end": 1939, "text": "(Khurana and Bhatnagar, 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Combining NMF base models", "sec_num": "2.2" }, { "text": "In this section, we present extensive experimentation 3 carried out to investigate the performance of NMF ensembles for extractive summarization. First, we evaluate the performance based on the combining methods described in Sec. 2 and the size of the ensemble. Next, we study the effect of diversity in the base models on ensemble performance. Finally, we test the statistical significance of our results. All experiments are performed on the DUC2001 4 data-set, consisting of 308 documents, and the DUC2002 1 data-set. We report macro-averaged ROUGE recall scores (R-1: ROUGE-1, R-2: ROUGE-2, R-L: ROUGE-L) (Lin, 2004).", "cite_spans": [ { "start": 587, "end": 598, "text": "(Lin, 2004)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "3" }, { "text": "In the interest of brevity, all results are reported as performance gain (+) or loss (-) over the scores of NMF-TR as the baseline method. Table 1 shows the ROUGE scores of this baseline for the DUC2001 and DUC2002 data-sets, as reported in (Khurana and Bhatnagar, 2019).", "cite_spans": [ { "start": 214, "end": 243, "text": "(Khurana and Bhatnagar, 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 124, "end": 131, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "3" }, { "text": "The primary objective of this experiment is to examine the comparative performance of the model integration methods. However, the size of an ensemble is a crucial factor that determines the quantum of performance gain. Oversized ensembles have obvious computational and memory overheads, while undersized ensembles run the risk of little performance gain and reduced stability. Ergo, we evaluate the performance of each combining method on ensembles of varying sizes. We compute macro-averaged ROUGE recall scores, each score averaged over ten executions to combat random variations. Table 2 shows the performance differential for the six combining methods and ten different ensemble sizes for the DUC2002 data-set. A cursory glance is sufficient to conclude that there are more negative entries than positive ones. This degradation in performance is most unexpected. Macro-level analyses in the bottom row and the rightmost column consolidate the surprise. The bottom row 'Total' shows the number of times the ensemble improves summary quality across all combining methods. For ensemble size 100, there is an ≈50% chance (9/18) of improving the summary quality across all methods. The rightmost column 'Total' shows the number of times a combining method improves the summary across all sizes.
It suggests that the simple combining methods improve the summary only marginally, even for large ensembles.", "cite_spans": [], "ref_spans": [ { "start": 573, "end": 580, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Examining Combining Methods", "sec_num": "3.1" }, { "text": "To confirm the trend, we repeated the same experiment with the DUC2001 data-set (Table 3). Apparently there is a better chance of improvement for this data-set using NMF ensembles, but the gain is meagre (less than 0.5 in each case) and does not justify the computational overhead.", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 85, "text": "(Table 3)", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Examining Combining Methods", "sec_num": "3.1" }, { "text": "Consolidating the observations from Tables 2 & 3, none of the combining methods yields noticeably better quality summaries than the baseline method. Further, increasing the size of the ensemble also does not hold promise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examining Combining Methods", "sec_num": "3.1" }, { "text": "Since Tables 2 & 3 exhibit similar trends, we choose to perform the remaining experiments on the DUC2002 data-set, as it clearly demonstrates the infirmity of NMF ensembles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examining Combining Methods", "sec_num": "3.1" }, { "text": "Since summary scores are sensitive to the number of latent topics into which a document is decomposed, varying the number of latent topics while decomposing the term-sentence matrix is a potential source of diversity in NMF base models. We explore two different ways to accomplish this.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diversity in Base Models", "sec_num": "3.2" }, { "text": "Selecting latent topics from a range: We create 100 base models with the number of latent topics randomly chosen from the range [r, 2r], where r is determined using the method proposed in (Aliguliyev, 2009). We expect that random initialization and variation in the number of topics would inject diversity into the base models. We do not test this method with stacking because stacking requires the stacked matrices to have the same number of columns. Results for this experiment (Table 4) belie our expectation. Thus, varying the number of topics does not improve the quality of the consensus summary.", "cite_spans": [ { "start": 122, "end": 129, "text": "[r, 2r]", "ref_id": null }, { "start": 179, "end": 197, "text": "(Aliguliyev, 2009)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 452, "end": 461, "text": "(Table 4)", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Diversity in Base Models", "sec_num": "3.2" }, { "text": "Varying latent topics over a range: Suspecting that repetition in the number of latent topics in the previous experiment curbs diversity in the ensemble, we attempt to create diversity by generating base models with all values in the range [r, 2r]. Hence, the size of the ensemble for this experiment is document specific. Here too, it is not possible to create a stacking ensemble because of the different number of latent topics in each base model.
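A minimal sketch of this second diversity source (again our illustration with the simplified scoring, not the authors' code): one base model for every topic count in [r, 2r], so the ensemble size is r + 1 and document specific.

```python
import numpy as np
from sklearn.decomposition import NMF

def varied_topic_scores(A, r):
    """One base model per topic count k in [r, 2r]; assumes 2r <= min(A.shape).
    Random initialization varies across the models as well."""
    rows = []
    for i, k in enumerate(range(r, 2 * r + 1)):
        nmf = NMF(n_components=k, init="random", random_state=i, max_iter=500)
        nmf.fit(A)
        rows.append(nmf.components_.sum(axis=0))  # simplified per-sentence scores
    return np.vstack(rows)  # (r + 1) x n score matrix; the base models disagree
                            # on the number of topics, so stacking does not apply
```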
Results for this diversity creation method are presented in Table 5.", "cite_spans": [ { "start": 233, "end": 240, "text": "[r, 2r]", "ref_id": null } ], "ref_spans": [ { "start": 497, "end": 504, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Diversity in Base Models", "sec_num": "3.2" }, { "text": "Thus, systematically varying the number of topics also fails to infuse diversity and shows no promise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diversity in Base Models", "sec_num": "3.2" }, { "text": "With no success in injecting diversity into the base models, we proceed to perform a deeper analysis to diagnose the cause of the degradation. We wanted to answer the question 'How many base models are responsible for pulling down the score of the consensus summary?'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with best summary", "sec_num": "3.3" }, { "text": "To answer this, we evaluated all base summaries and noted their ROUGE scores. The score of the best base summary was compared against that of the ensemble summary and translated into a win (if the ensemble summary score is higher or equal) and a loss otherwise. This exercise was done for all integration methods and for ensemble sizes 30 and 100.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with best summary", "sec_num": "3.3" }, { "text": "Results shown in Table 6 are almost startling. E.g., 23/510 for ensemble size 30 means that out of 533 total documents, for 23 documents the ensemble summary was at least as good as the best base summary; for 510 documents, the ensemble summary score was worse than that of the best base summary. Thus NMF ensemble summaries fail miserably to improve quality over the best base summary.", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 24, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Comparison with best summary", "sec_num": "3.3" }, { "text": "We investigate the statistical significance of our results for all combining methods. We employ the bootstrap approach recommended in (Dror et al., 2018), and test the null hypothesis H0: the NMF ensemble method performs no worse than the baseline NMF-TR, against the alternative hypothesis H1: the NMF ensemble method performs worse than NMF-TR. For each combining method, we generate one million bootstrap samples from the ROUGE scores of the 533 ensemble summaries. We compute the difference in performance w.r.t. the baseline and estimate the p-value as the ratio of the number of times the ensemble method beats NMF-TR by twice the observed margin on the bootstrap samples to the total number of samples. For p-value > 0.05, we reject the null hypothesis. Table 7 shows the p-values obtained for each ROUGE metric and combining method. According to the computed p-values (Table 7), we fail to accept the null hypothesis for each combining method and each ROUGE metric.
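For reference, a minimal sketch of this bootstrap estimate (our illustration in the spirit of Dror et al. (2018); `ens` and `base` are hypothetical arrays of per-document ROUGE scores for the ensemble and for NMF-TR, and resampling is chunked only to bound memory).

```python
import numpy as np

def bootstrap_p(ens, base, n_samples=1_000_000, chunk=10_000, seed=0):
    diff = np.asarray(ens, dtype=float) - np.asarray(base, dtype=float)
    delta = diff.mean()                        # observed mean difference
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_samples // chunk):
        idx = rng.integers(0, diff.size, size=(chunk, diff.size))
        # count resamples where the ensemble beats NMF-TR by twice the margin
        hits += int((diff[idx].mean(axis=1) > 2.0 * delta).sum())
    return hits / n_samples
```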
Therefore, the NMF ensemble methods are not statistically significantly better than the baseline method.", "cite_spans": [ { "start": 130, "end": 149, "text": "(Dror et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 722, "end": 729, "text": "Table 7", "ref_id": "TABREF11" }, { "start": 833, "end": 842, "text": "(Table 7)", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Statistical Significance of Combining Methods", "sec_num": "3.4" }, { "text": "Extensive empirical investigation shows that leveraging the stochastic variations due to random initialization of the NMF factor matrices for extractive document summarization is not straightforward. We experimented with the different NMF solvers available in (Pedregosa et al., 2011) and found no change in the results. In the absence of any concrete explanation for the degraded performance, we offer two plausible reasons. First, the apparently simple combining methods fail to tease apart the differences in term-topic and topic-sentence strengths in the latent space. Possible future investigations in this direction include projecting these matrices into a higher dimension, and drawing from cluster ensemble research to design more sophisticated combining methods.", "cite_spans": [ { "start": 249, "end": 273, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "4" }, { "text": "The second reason is related to the mechanics of sentence ranking and selection for extractive document summarization. Combining scores from base models alters the sentence ranking, and probably less important sentences get pulled up into the summary. This is most likely to happen with the lowest-ranked sentence in the summary. A single bad sentence in the summary can lower the score substantially. Achieving stable ranks in ensemble technology could be another direction of research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "4" }, { "text": "A well-studied data-set for single document summarization, available at https://duc.nist.gov, consisting of 533 unique documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We calculate r according to the formula proposed in (Aliguliyev, 2009).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Code is available at https://github.com/alkakhurana/NMF-Ensembles 4 Available at https://duc.nist.gov", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A new sentence similarity measure and sentence based extractive technique for automatic text summarization", "authors": [ { "first": "Ramiz", "middle": [ "M" ], "last": "Aliguliyev", "suffix": "" } ], "year": 2009, "venue": "Expert Systems with Applications", "volume": "36", "issue": "4", "pages": "7764--7772", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramiz M Aliguliyev. 2009. A new sentence similarity measure and sentence based extractive technique for automatic text summarization.
Expert Systems with Applications, 36(4):7764-7772.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Stability of topic modeling via matrix factorization", "authors": [ { "first": "Mark", "middle": [], "last": "Belford", "suffix": "" }, { "first": "Brian", "middle": [ "Mac" ], "last": "Namee", "suffix": "" }, { "first": "Derek", "middle": [], "last": "Greene", "suffix": "" } ], "year": 2018, "venue": "Expert Systems with Applications", "volume": "91", "issue": "", "pages": "159--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Belford, Brian Mac Namee, and Derek Greene. 2018. Stability of topic modeling via matrix factorization. Expert Systems with Applications, 91:159-169.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "SVD based initialization: A head start for nonnegative matrix factorization", "authors": [ { "first": "Christos", "middle": [], "last": "Boutsidis", "suffix": "" }, { "first": "Efstratios", "middle": [], "last": "Gallopoulos", "suffix": "" } ], "year": 2008, "venue": "Pattern Recognition", "volume": "41", "issue": "4", "pages": "1350--1362", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christos Boutsidis and Efstratios Gallopoulos. 2008. SVD based initialization: A head start for nonnegative matrix factorization. Pattern Recognition, 41(4):1350-1362.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The hitchhiker's guide to testing statistical significance in natural language processing", "authors": [ { "first": "Rotem", "middle": [], "last": "Dror", "suffix": "" }, { "first": "Gili", "middle": [], "last": "Baumer", "suffix": "" }, { "first": "Segev", "middle": [], "last": "Shlomov", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1383--1392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Ensemble non-negative matrix factorization methods for clustering protein-protein interactions", "authors": [ { "first": "Derek", "middle": [], "last": "Greene", "suffix": "" }, { "first": "Gerard", "middle": [], "last": "Cagney", "suffix": "" }, { "first": "Nevan", "middle": [], "last": "Krogan", "suffix": "" }, { "first": "P\u00e1draig", "middle": [], "last": "Cunningham", "suffix": "" } ], "year": 2008, "venue": "Bioinformatics", "volume": "24", "issue": "15", "pages": "1722--1728", "other_ids": {}, "num": null, "urls": [], "raw_text": "Derek Greene, Gerard Cagney, Nevan Krogan, and P\u00e1draig Cunningham. 2008. Ensemble non-negative matrix factorization methods for clustering protein-protein interactions.
Bioinformatics, 24(15):1722-1728.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Extractive document summarization using non-negative matrix factorization", "authors": [ { "first": "Alka", "middle": [], "last": "Khurana", "suffix": "" }, { "first": "Vasudha", "middle": [], "last": "Bhatnagar", "suffix": "" } ], "year": 2019, "venue": "International Conference on Database and Expert Systems Applications", "volume": "", "issue": "", "pages": "76--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alka Khurana and Vasudha Bhatnagar. 2019. Extractive document summarization using non-negative matrix factorization. In International Conference on Database and Expert Systems Applications, pages 76-90. Springer.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Using diversity in cluster ensembles", "authors": [ { "first": "L", "middle": [ "I" ], "last": "Kuncheva", "suffix": "" }, { "first": "S", "middle": [ "T" ], "last": "Hadjitodorov", "suffix": "" } ], "year": 2004, "venue": "IEEE International Conference on Systems, Man and Cybernetics", "volume": "2", "issue": "", "pages": "1214--1219", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. I. Kuncheva and S. T. Hadjitodorov. 2004. Using diversity in cluster ensembles. In 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No.04CH37583), volume 2, pages 1214-1219 vol.2.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning the parts of objects by non-negative matrix factorization", "authors": [ { "first": "Daniel", "middle": [ "D" ], "last": "Lee", "suffix": "" }, { "first": "H", "middle": [ "Sebastian" ], "last": "Seung", "suffix": "" } ], "year": 1999, "venue": "Nature", "volume": "401", "issue": "6755", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel D Lee and H Sebastian Seung. 1999. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automatic generic document summarization based on non-negative matrix factorization", "authors": [ { "first": "Ju-Hong", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sun", "middle": [], "last": "Park", "suffix": "" }, { "first": "Chan-Min", "middle": [], "last": "Ahn", "suffix": "" }, { "first": "Daeho", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2009, "venue": "Information Processing & Management", "volume": "45", "issue": "1", "pages": "20--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ju-Hong Lee, Sun Park, Chan-Min Ahn, and Daeho Kim. 2009. Automatic generic document summarization based on non-negative matrix factorization. Information Processing & Management, 45(1):20-34.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "ROUGE: A package for automatic evaluation of summaries. Text Summarization Branches Out", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries.
Text Summarization Branches Out.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Snapshot ensembles of non-negative matrix factorization for stability of topic modeling", "authors": [ { "first": "Jipeng", "middle": [], "last": "Qiang", "suffix": "" }, { "first": "Yun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yunhao", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "Applied Intelligence", "volume": "", "issue": "", "pages": "1--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jipeng Qiang, Yun Li, Yunhao Yuan, and Wei Liu. 2018. Snapshot ensembles of non-negative matrix factorization for stability of topic modeling. Applied Intelligence, pages 1-13.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Document clustering using nonnegative matrix factorization", "authors": [ { "first": "Farial", "middle": [], "last": "Shahnaz", "suffix": "" }, { "first": "Michael", "middle": [ "W" ], "last": "Berry", "suffix": "" }, { "first": "V", "middle": [ "Paul" ], "last": "Pauca", "suffix": "" }, { "first": "Robert", "middle": [ "J" ], "last": "Plemmons", "suffix": "" } ], "year": 2006, "venue": "Information Processing & Management", "volume": "42", "issue": "2", "pages": "373--386", "other_ids": {}, "num": null, "urls": [], "raw_text": "Farial Shahnaz, Michael W Berry, V Paul Pauca, and Robert J Plemmons. 2006. Document clustering using nonnegative matrix factorization.
Information Processing & Management, 42(2):373-386.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Ensemble document clustering using weighted hypergraph generated by NMF", "authors": [ { "first": "Hiroyuki", "middle": [], "last": "Shinnou", "suffix": "" }, { "first": "Minoru", "middle": [], "last": "Sasaki", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions", "volume": "", "issue": "", "pages": "77--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroyuki Shinnou and Minoru Sasaki. 2007. Ensemble document clustering using weighted hypergraph generated by NMF. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 77-80.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Local topic discovery via boosted ensemble of nonnegative matrix factorization", "authors": [ { "first": "Sangho", "middle": [], "last": "Suh", "suffix": "" }, { "first": "Jaegul", "middle": [], "last": "Choo", "suffix": "" }, { "first": "Joonseok", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Chandan", "middle": [ "K" ], "last": "Reddy", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 26th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "4944--4948", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sangho Suh, Jaegul Choo, Joonseok Lee, and Chandan K Reddy. 2017. Local topic discovery via boosted ensemble of nonnegative matrix factorization. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 4944-4948. AAAI Press.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Ensemble methods: foundations and algorithms", "authors": [ { "first": "Zhi-Hua", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhi-Hua Zhou. 2012. Ensemble methods: foundations and algorithms. Chapman and Hall/CRC.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Number of documents with highest ROUGE-L recall score for different initial seed values of NMF factor matrices.", "uris": null, "type_str": "figure", "num": null }, "TABREF1": { "html": null, "num": null, "text": "Macro-averaged ROUGE recall performance of NMF-TR method", "type_str": "table", "content": "" }, "TABREF2": { "html": null, "num": null, "text": "Performance differential in macro-averaged ROUGE recall scores w.r.t NMF-TR for different ensemble sizes and combining methods for the DUC2002 data-set.", "type_str": "table", "content": "
& 3 exhibit similar trends, we
choose to perform remaining experiments on
DUC2002 data-set as it clearly demonstrates in-
firmity of NMF ensembles.
" }, "TABREF3": { "html": null, "num": null, "text": "Performance differential in macro-averaged ROUGE recall scores w.r.t NMF-TR for different ensemble sizes and combining methods. is the size of ensemble. '-' indicates that the integration method is not meaningful.", "type_str": "table", "content": "
DUC2001
Method     Metric    2        4        6        8        10       20       30       40       50       100     Total
Average    R-1      -0.328   -0.158   -0.011   +0.066   +0.054   +0.052   +0.177   +0.105   +0.070   +0.135    7
           R-2      -0.076   +0.152   +0.337   +0.324   +0.294   +0.257   +0.338   +0.313   +0.295   +0.341    9
           R-L      -0.118   +0.003   +0.154   +0.208   +0.163   +0.158   +0.255   +0.187   +0.151   +0.191    9
Median     R-1        -      -0.121   +0.037   +0.104   +0.068   +0.043   +0.047   +0.115   +0.180   +0.210    8
           R-2        -      +0.145   +0.301   +0.302   +0.301   +0.252   +0.224   +0.288   +0.340   +0.315    9
           R-L        -      -0.002   +0.188   +0.188   +0.140   +0.112   +0.108   +0.196   +0.230   +0.232    8
Quartile   R-1        -      -0.391   +0.062   -0.168   -0.087   -0.101   -0.001   +0.005   -0.004   +0.073    3
           R-2        -      -0.014   +0.274   +0.205   +0.199   +0.223   +0.226   +0.266   +0.255   +0.273    8
           R-L        -      -0.205   +0.243   +0.003   +0.033   +0.055   +0.100   +0.124   +0.120   +0.150    8
Voting     R-1        -      -0.464   -0.225   -0.093   +0.008   -0.011   +0.007   +0.046   -0.026   +0.009    4
           R-2        -      -0.290   +0.005   +0.210   +0.268   +0.290   +0.360   +0.379   +0.328   +0.428    8
           R-L        -      -0.192   +0.022   +0.130   +0.189   +0.152   +0.159   +0.202   +0.123   +0.184    8
Ranking    R-1        -      -2.299   -2.295   -2.180   -2.158   -1.768   -1.474   -1.607   -1.391   -1.052    0
           R-2        -      -1.659   -1.680   -1.495   -1.497   -1.164   -0.912   -1.032   -0.895   -0.662    0
           R-L        -      -1.619   -1.599   -1.508   -1.535   -1.152   -0.849   -0.945   -0.812   -0.522    0
Stacking   R-1      -0.009   +0.015   -0.032   -0.021   +0.085   +0.092   +0.032   +0.161   +0.042   -0.034    6
           R-2      +0.216   +0.250   +0.228   +0.201   +0.333   +0.271   +0.300   +0.344   +0.311   +0.242   10
           R-L      +0.178   +0.164   +0.115   +0.163   +0.252   +0.217   +0.211   +0.279   +0.211   +0.127   10
Total                2        6        12       12       14       13       14       15       13       14
" }, "TABREF4": { "html": null, "num": null, "text": "Performance differential in macro-averaged ROUGE recall scores w.r.t NMF-TR for different ensemble sizes and combining methods. is the size of ensemble. '-' indicates that the integration method is not meaningful.", "type_str": "table", "content": "" }, "TABREF5": { "html": null, "num": null, "text": "belie our expectation. Thus varying the number of topics does not improve the", "type_str": "table", "content": "
        Avg      Med      Quart    Vote     Rank
R-1    -0.005   +0.170   +0.041   +0.017   -1.745
R-2    -0.095   -0.035   -0.064   -0.126   -2.227
R-L    +0.002   +0.129    0.000   -0.053   -1.622
" }, "TABREF6": { "html": null, "num": null, "text": "Performance differential for random variation in number of latent topics in the range [r, 2r] for ensemble size 100.", "type_str": "table", "content": "" }, "TABREF9": { "html": null, "num": null, "text": "Wins/losses for ensemble summary compared with best base model summary for DUC2002 corpus.", "type_str": "table", "content": "
" }, "TABREF11": { "html": null, "num": null, "text": "Probability values for bootstrap sampling based test of ensemble performance w.r.t baseline.", "type_str": "table", "content": "
" } } } }