{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:29:29.555752Z"
},
"title": "Semi-supervised Interactive Intent Labeling",
"authors": [
{
"first": "Saurav",
"middle": [],
"last": "Sahay",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Intel Labs",
"location": {
"country": "USA"
}
},
"email": "saurav.sahay@intel.com"
},
{
"first": "Eda",
"middle": [
"Okur"
],
"last": "Nagib",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Intel Labs",
"location": {
"country": "USA"
}
},
"email": ""
},
{
"first": "Hakim",
"middle": [],
"last": "Lama",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Intel Labs",
"location": {
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Building the Natural Language Understanding (NLU) modules of task-oriented Spoken Dialogue Systems (SDS) involves a definition of intents and entities, collection of task-relevant data, annotating the data with intents and entities, and then repeating the same process over and over again for adding any functionality/enhancement to the SDS. In this work, we showcase an Intent Bulk Labeling system where SDS developers can interactively label and augment training data from unlabeled utterance corpora using advanced clustering and visual labeling methods. We extend the Deep Aligned Clustering (Zhang et al., 2021) work with a better backbone BERT model, explore techniques to select the seed data for labeling, and develop a data balancing method using an oversampling technique that utilizes paraphrasing models. We also look at the effect of data augmentation on the clustering process. Our results show that we can achieve over 10% gain in clustering accuracy on some datasets using the combination of the above techniques. Finally, we extract utterance embeddings from the clustering model and plot the data to interactively bulk label the samples, reducing the time and effort for data labeling of the whole dataset significantly.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Building the Natural Language Understanding (NLU) modules of task-oriented Spoken Dialogue Systems (SDS) involves a definition of intents and entities, collection of task-relevant data, annotating the data with intents and entities, and then repeating the same process over and over again for adding any functionality/enhancement to the SDS. In this work, we showcase an Intent Bulk Labeling system where SDS developers can interactively label and augment training data from unlabeled utterance corpora using advanced clustering and visual labeling methods. We extend the Deep Aligned Clustering (Zhang et al., 2021) work with a better backbone BERT model, explore techniques to select the seed data for labeling, and develop a data balancing method using an oversampling technique that utilizes paraphrasing models. We also look at the effect of data augmentation on the clustering process. Our results show that we can achieve over 10% gain in clustering accuracy on some datasets using the combination of the above techniques. Finally, we extract utterance embeddings from the clustering model and plot the data to interactively bulk label the samples, reducing the time and effort for data labeling of the whole dataset significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Acquiring an accurately labeled corpus is necessary for training machine learning (ML) models in various classification applications. Labeling is an expensive and labor-intensive activity requiring annotators to understand the domain well and to label the instances one at a time. In this work, we explore the task of labeling multiple intents visually with the help of a semi-supervised clustering algorithm. The clustering algorithm helps learn an embedding representation of the training data that is well-suited for downstream labeling. In order to label, we further reduce the high dimensional representation using the UMAP (McInnes et al., 2018) . Since utterances are short, uncovering their semantic meaning to group them together is very challenging. SBERT (Reimers and Gurevych, 2019) showed that out-of-the-box BERT (Devlin et al., 2018 ) maps sentences to a vector space that is not very suitable to be used with common measures like cosine-similarity and euclidean distances. This happens because in the BERT network, there is no independent sentence embedding computation, which makes it difficult to derive sentence embeddings. Researchers utilize the mean pooling of word embeddings as an approximate measure of the sentence embedding. However, results show that this practice yields inappropriate sentence embeddings that are often worse than averaging GloVe embeddings (Pennington et al., 2014; Reimers and Gurevych, 2019) . Many researchers have developed sentence embedding methods: Skip-Thought (Kiros et al., 2015) , In-ferSent (Conneau et al., 2017) , USE (Cer et al., 2018) , SBERT (Reimers and Gurevych, 2019) . State-of-the-art SBERT adds a pooling operation to the output of BERT to derive a fixed-sized sentence embedding and fine-tunes a Siamese network on the sentence-pairs from the NLI (Bowman et al., 2015; Williams et al., 2017) and STSb (Cer et al., 2017) datasets.",
"cite_spans": [
{
"start": 629,
"end": 651,
"text": "(McInnes et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 766,
"end": 794,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF24"
},
{
"start": 827,
"end": 847,
"text": "(Devlin et al., 2018",
"ref_id": "BIBREF10"
},
{
"start": 1387,
"end": 1412,
"text": "(Pennington et al., 2014;",
"ref_id": "BIBREF23"
},
{
"start": 1413,
"end": 1440,
"text": "Reimers and Gurevych, 2019)",
"ref_id": "BIBREF24"
},
{
"start": 1516,
"end": 1536,
"text": "(Kiros et al., 2015)",
"ref_id": "BIBREF15"
},
{
"start": 1550,
"end": 1572,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 1579,
"end": 1597,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 1606,
"end": 1634,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF24"
},
{
"start": 1818,
"end": 1839,
"text": "(Bowman et al., 2015;",
"ref_id": "BIBREF4"
},
{
"start": 1840,
"end": 1862,
"text": "Williams et al., 2017)",
"ref_id": "BIBREF36"
},
{
"start": 1872,
"end": 1890,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
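To make the embedding contrast above concrete, here is a minimal sketch (not the authors' code) comparing naive mean-pooled BERT token embeddings with SBERT sentence embeddings under cosine similarity. It assumes the `transformers` and `sentence-transformers` packages; the example sentences are illustrative.

```python
# Minimal sketch: mean-pooled BERT vs. SBERT for sentence similarity.
import torch
from transformers import AutoTokenizer, AutoModel
from sentence_transformers import SentenceTransformer, util

sentences = ["book a table for two", "reserve a restaurant for 2 people"]

# Naive approach: mean-pool the last-layer token embeddings of plain BERT,
# masking out padding tokens.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
batch = tok(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state          # (batch, seq_len, dim)
mask = batch["attention_mask"].unsqueeze(-1)
mean_pooled = (hidden * mask).sum(1) / mask.sum(1)
print("mean-pooled BERT:", util.cos_sim(mean_pooled[0], mean_pooled[1]).item())

# SBERT: the same pooling idea, but fine-tuned on NLI/STSb so that cosine
# similarity in the resulting space is meaningful.
sbert = SentenceTransformer("bert-base-nli-stsb-mean-tokens")
emb = sbert.encode(sentences, convert_to_tensor=True)
print("SBERT:", util.cos_sim(emb[0], emb[1]).item())
```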
{
"text": "The Deep Aligned Clustering (DAC) (Zhang et al., 2021) introduced an effective method for clustering and discovering new intents. DAC transfers the prior knowledge of a limited number of known intents and incorporates a technique to align cluster centroids in successive training epochs. The limited known intents are used to pre-train the model. The authors use the pre-trained BERT model (Devlin et al., 2018) to extract deep intent features, then pre-train the model with a randomly selected subset of labeled data. The pre-trained parameters are used to obtain well-initialized intent representations. K-Means clustering is performed on the extracted intent features along with a method to estimate the number of clusters and the alignment strategy to obtain the final cluster assignments. The K-Means algorithm selects cluster centroids that minimize the Euclidean distance within the cluster. Due to this Euclidean distance optimization, clustering using the SBERT model to extract feature embeddings naturally outperforms other embedding methods. In our work, we have extended the DAC algorithm with the SBERT as an embedding backbone for clustering of utterances.",
"cite_spans": [
{
"start": 34,
"end": 54,
"text": "(Zhang et al., 2021)",
"ref_id": "BIBREF38"
},
{
"start": 390,
"end": 411,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
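A minimal sketch of the embedding-plus-clustering combination described above: SBERT features fed to K-Means. The utterances, and the assumption that the intent count is known, are illustrative only; DAC itself estimates the cluster count and adds pre-training and centroid alignment, which are not reproduced here.

```python
# Minimal sketch: K-Means over SBERT utterance embeddings.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

utterances = [
    "what is my account balance",
    "show my balance please",
    "play some jazz music",
    "put on a jazz playlist",
]
n_intents = 2  # assumed known here only for illustration

encoder = SentenceTransformer("bert-base-nli-stsb-mean-tokens")
features = encoder.encode(utterances)               # (n, 768) intent features
labels = KMeans(n_clusters=n_intents, n_init=10).fit_predict(features)
print(dict(zip(utterances, labels)))
```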
{
"text": "In semi-supervised learning, the seed set is selected using a sampling strategy: \"A simple random sample of size n consists of n individuals from the population chosen such that every set of n individuals has an equal chance to be the sample actually selected.\" (Moore and McCabe, 1989) . However, these sample subsets may not represent the original data adequately because randomization methods do not exploit the correlations in the original population. In a stratified random sample, the population is classified first into groups (called strata) with similar characteristics. Then a simple random sample is chosen from each strata separately. These simple random samples are combined to form the overall sample. Stratified sampling can help ensure that there are enough observations within each strata to make meaningful inferences. DAC uses the Random Sampling method for seed selection. In this work, we have explored a couple of stratified sampling approaches for seed selection in hope to mitigate the limitations of random sampling and improve the clustering outcome.",
"cite_spans": [
{
"start": 262,
"end": 286,
"text": "(Moore and McCabe, 1989)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
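The contrast between the two sampling schemes can be sketched as follows. The population and strata here are hypothetical placeholders; in our setting, strata would come from clustering, since intent labels are unknown at seed-selection time.

```python
# Minimal sketch: simple random sampling vs. stratified sampling of a seed set.
import random
from collections import defaultdict

population = [("utt%03d" % i, "stratum_%d" % (i % 3)) for i in range(300)]
seed_size = 30

# Simple random sample: every subset of size seed_size is equally likely.
srs = [utt for utt, _ in random.sample(population, seed_size)]

# Stratified sample: group into strata, then simple-random-sample each stratum,
# guaranteeing every stratum is represented in the seed set.
strata = defaultdict(list)
for utt, stratum in population:
    strata[stratum].append(utt)
per_stratum = seed_size // len(strata)
stratified = [utt for members in strata.values()
              for utt in random.sample(members, per_stratum)]
print(len(srs), len(stratified))
```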
{
"text": "Another issue we address in this work is class sample imbalance. Seed selection generally yields an imbalanced dataset, which in turn impairs the predictive capability of the classification algorithms (Douzas et al., 2018) . Some methods manipulate the training data, aiming to change the class distribution towards a more balanced one by undersampling or oversampling (Kotsiantis et al., 2006; Galar et al., 2011) . SMOTE (Chawla et al., 2002) is a popular oversampling technique proposed to improve random oversampling. In one variant of SMOTE, borderline minority instances are heuristically selected and linearly interpolated to create synthetic samples. In this work, we take inspiration from the SMOTE method and choose borderline minority instances and paraphrase them using a Sequence to Sequence Paraphrasing model. The paraphrases provide natural and meaningful augmentations of the dataset that are not synthetic.",
"cite_spans": [
{
"start": 201,
"end": 222,
"text": "(Douzas et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 369,
"end": 394,
"text": "(Kotsiantis et al., 2006;",
"ref_id": "BIBREF16"
},
{
"start": 395,
"end": 414,
"text": "Galar et al., 2011)",
"ref_id": "BIBREF13"
},
{
"start": 417,
"end": 444,
"text": "SMOTE (Chawla et al., 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous work has shown that data augmentation can boost performance on text classification tasks (Barzilay and McKeown, 2001; Dolan and Brockett, 2005; Lan et al., 2017; Hu et al., 2019) . Wieting et al. (2017) used Neural Machine Translation (NMT) (Sutskever et al., 2014) to translate the non-English side of the parallel text to get English-English paraphrase pairs. This method has been scaled to generate large paraphrase corpora (Wieting and Gimpel, 2018) . Prior work in learning paraphrases has used autoencoders (Socher et al., 2011) , encoder-decoder architectures as in BART , and other learning frameworks such as NMT (Sokolov and Filimonov, 2020) . Data augmentation using paraphrasing is a simple yet effective strategy that we explored in this work to improve the clustering.",
"cite_spans": [
{
"start": 98,
"end": 126,
"text": "(Barzilay and McKeown, 2001;",
"ref_id": "BIBREF2"
},
{
"start": 127,
"end": 152,
"text": "Dolan and Brockett, 2005;",
"ref_id": "BIBREF11"
},
{
"start": 153,
"end": 170,
"text": "Lan et al., 2017;",
"ref_id": "BIBREF17"
},
{
"start": 171,
"end": 187,
"text": "Hu et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 190,
"end": 211,
"text": "Wieting et al. (2017)",
"ref_id": "BIBREF35"
},
{
"start": 250,
"end": 274,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF30"
},
{
"start": 436,
"end": 462,
"text": "(Wieting and Gimpel, 2018)",
"ref_id": "BIBREF34"
},
{
"start": 522,
"end": 543,
"text": "(Socher et al., 2011)",
"ref_id": "BIBREF28"
},
{
"start": 631,
"end": 660,
"text": "(Sokolov and Filimonov, 2020)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For interactive visual labeling of utterances, we build up from the learnt embedding representation of the data and fine-tune it using the clustering. DAC learns to cluster with a weak self-supervised signal to update its representation and to optimize both local (via K-Means) and global information (via cluster alignment). This results in an optimized intent-level feature representation. This high dimensional latent representation can be reduced to 2-3 dimensions using the Uniform Manifold Approximation and Projection (UMAP) (McInnes et al., 2018) . We use Rasa WhatLies 1 library (Warmerdam et al., 2020) to extract the UMAP embeddings. For interactive labeling, we utilize an interactive visualization library called Human Learn 2 (Warmerdam et al., 2021) that allows us to draw decision boundaries on a plot. By building on top of the work of Rasa Bulk Labelling 3 UI ; Bokeh Development Team, 2018), we augment the interface with our learnt representation for interactive labeling. Although we focus on NLU, other studies like 'Conversation Learner' (Shukla et al., 2020) focus on interactive dialogue managers (DM) with human-in-the-loop annotations of dialogue data via machine teaching. Note also that although the majority of task-oriented SDS still involves defining intents/entities, there are recent examples that argue for a richer target representation than the classical intent/entity model, such as SMCalFlow (Andreas et al., 2020) . Figure 1 describes the semi-supervised labeling process. We start with the unlabeled utterance corpus and apply seed sampling methods to select a small subset of the corpus. Once the selected subset is manually labeled, we address the data imbalance with our paraphrase-based minority oversampling method. We can also augment the labeled corpus with paraphrasing to provide more data for the clustering process. The DAC algorithm is applied with improved embeddings to extract the utterance representation for interactive labeling.",
"cite_spans": [
{
"start": 532,
"end": 554,
"text": "(McInnes et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 1061,
"end": 1082,
"text": "(Shukla et al., 2020)",
"ref_id": "BIBREF27"
},
{
"start": 1431,
"end": 1453,
"text": "(Andreas et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 1456,
"end": 1464,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
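A minimal sketch of the visualization step, assuming the `umap-learn` and `bokeh` packages. The random `features` array stands in for the embeddings extracted from the clustering model; the paper's actual pipeline uses the WhatLies and Human Learn libraries rather than this bare-bones plot.

```python
# Minimal sketch: project learnt intent features to 2-D with UMAP and plot
# them with Bokeh for visual bulk labeling.
import numpy as np
import umap
from bokeh.plotting import figure, output_file, show

features = np.random.rand(500, 768)   # placeholder for learnt embeddings
coords = umap.UMAP(n_components=2).fit_transform(features)

output_file("bulk_labeling.html")
p = figure(title="Utterance embeddings (UMAP)",
           tools="lasso_select,pan,wheel_zoom,reset")
p.scatter(coords[:, 0], coords[:, 1], size=4, alpha=0.6)
show(p)  # lasso-select a region and assign one intent label to all points in it
```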
{
"text": "For sentence representation, we use the Hug-gingFace Transformers model BERT-base-nli-stsbmean-tokens 4 . This model was first fine-tuned on a combination of Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) (570K sentence-pairs with labels contradiction, entailment, and neutral) and Multi-Genre Natural Language Inference (Williams et al., 2017 ) (430K diverse sentence-pairs with same labels as SNLI) datasets, then on Semantic Textual Similarity benchmark (STSb) (Cer et al., 2017 ) (provide labels between 0 and 5 on the semantic relatedness of sentence pairs) training set. This model achieves a performance of 85.14 (Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels) on STSb regression evaluation. For context, the average BERT embeddings achieve a performance of 46.35 on this evaluation (Reimers and Gurevych, 2019).",
"cite_spans": [
{
"start": 339,
"end": 361,
"text": "(Williams et al., 2017",
"ref_id": "BIBREF36"
},
{
"start": 482,
"end": 499,
"text": "(Cer et al., 2017",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation",
"sec_num": "2.1"
},
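For clarity, a sketch of how such an STSb score is computed, i.e., Spearman's rank correlation between embedding cosine similarities and gold relatedness labels. The sentence pairs and gold scores below are invented for illustration.

```python
# Minimal sketch of the STSb metric: Spearman correlation between embedding
# cosine similarities and gold relatedness labels.
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

pairs = [("a man is playing a guitar", "a person plays a guitar"),
         ("a man is playing a guitar", "a chef is cooking pasta"),
         ("a man is playing a guitar", "the stock market fell today")]
gold = [4.8, 1.0, 0.2]  # STSb labels lie in [0, 5]

model = SentenceTransformer("bert-base-nli-stsb-mean-tokens")
cos = []
for s1, s2 in pairs:
    e1, e2 = model.encode([s1, s2], convert_to_tensor=True)
    cos.append(util.cos_sim(e1, e2).item())
print(spearmanr(cos, gold).correlation)
```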
{
"text": "We explore two selection and sampling strategies for seed selection as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seed Selection",
"sec_num": "2.2"
},
{
"text": "\u2022 Cluster-based Selection (CB): In this method, we apply K-Means clustering on the N utterances to partition the data into n seed number of subsets. For example, if 10% of the data has 100 utterances, this method creates 100 clusters from the dataset. We then pick the centroid's nearest neighbor as part of the seed set for all the clusters. The naive intuition for this strategy is that it would create a large number of clusters spread all over the data distribution (N/n instances per cluster on average for uniformly distributed instances).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seed Selection",
"sec_num": "2.2"
},
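A minimal sketch of CB selection under the stated intuition; the embedding matrix is a random placeholder.

```python
# Minimal sketch of Cluster-based Selection (CB): cluster into as many groups
# as seed utterances and take each centroid's nearest neighbor as a seed.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin

features = np.random.rand(1000, 768)   # N utterance embeddings
n_seed = 100                           # e.g., a 10% labeled ratio

km = KMeans(n_clusters=n_seed, n_init=10).fit(features)
# For each of the n_seed centroids, the index of the closest utterance.
seed_idx = pairwise_distances_argmin(km.cluster_centers_, features)
print(len(set(seed_idx)), "seed utterances selected")
```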
{
"text": "\u2022 Predicted Cluster Sampling (PCS): This is a stratified sampling method where we first predict the number of clusters and then sample instances from each cluster. We use the cluster size estimation method from the DAC work as follows: K-Means is performed with a large K (initialized with twice the ground truth number of classes). The assumption is that real clusters tend to be dense and the cluster mean size threshold is assumed to be N/K'. where |S i | is the size of the ith produced cluster, and \u03b4(condition) is an indicator function. It outputs 1 if condition is satisfied, and outputs 0 if not. The method seems to perform well as reported in DAC work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seed Selection",
"sec_num": "2.2"
},
{
"text": "K = K i=1 \u03b4(|S i | >= t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seed Selection",
"sec_num": "2.2"
},
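A sketch of this cluster-count estimate; the embeddings and the assumed ground-truth class count are placeholders.

```python
# Sketch of the DAC-style cluster-count estimate used by PCS: run K-Means with
# a large K', drop clusters smaller than the mean size t = N/K', count the rest.
import numpy as np
from sklearn.cluster import KMeans

features = np.random.rand(1000, 768)
k_prime = 2 * 20  # twice an assumed ground-truth number of classes

assignments = KMeans(n_clusters=k_prime, n_init=10).fit_predict(features)
sizes = np.bincount(assignments, minlength=k_prime)  # |S_i| per cluster
t = len(features) / k_prime                          # mean-size threshold
K = int((sizes >= t).sum())                          # K = sum_i delta(|S_i| >= t)
print("estimated number of clusters:", K)
```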
{
"text": "For handling data imbalance, we propose a paraphrasing-based method to over-sample the minority classes. The method is described as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Balancing and Augmentation",
"sec_num": "2.3"
},
{
"text": "1. For every instance p i (i = 1, 2, ..., p num ) in the minority class P , we calculate its m nearest neighbors from the whole training set T . The number of majority examples among the m nearest neighbors is denoted by m (0 \u2264 m \u2264 m).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Balancing and Augmentation",
"sec_num": "2.3"
},
{
"text": "2. If m = m , i.e., all the m nearest neighbors of p i are majority examples, p i is considered to be noise and is not operated in the following steps. If m 2 \u2264 m < m, namely the number of p i 's majority nearest neighbors is larger than the number of its minority ones, p i is considered to be easily misclassified and put into a set DANGER. If 0 \u2264 m < m 2 , p i is safe and does not need to participate in the following steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Balancing and Augmentation",
"sec_num": "2.3"
},
{
"text": "3. The examples in DANGER are the borderline data of the minority class P , and we can see that DANGER \u2286 P . We set DANGER =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Balancing and Augmentation",
"sec_num": "2.3"
},
{
"text": "{p 1 , p 2 , ..., p dnum }, 0 \u2264 d num \u2264 p num 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Balancing and Augmentation",
"sec_num": "2.3"
},
{
"text": "For each borderline data (that can be easily misclassified), we paraphrase the instance. For paraphrasing, we fine-tuned the BART Sequence to Sequence model on a combination of 3 datasets: ParaNMT (Wieting and Gimpel, 2018), PAWS Yang et al., 2019) , and the MSRP corpus (Dolan and Brockett, 2005).",
"cite_spans": [
{
"start": 230,
"end": 248,
"text": "Yang et al., 2019)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Balancing and Augmentation",
"sec_num": "2.3"
},
{
"text": "5. We classify the paraphrased sample with a RoBERTa based classifier fine-tuned on the labeled data and only add the instance if the classifier predicts the same label as the minority instance. We call this the 'ParaMote' method in our experiments. Without this last step (5), we call this overall approach our 'Paraphrasing' method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Balancing and Augmentation",
"sec_num": "2.3"
},
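A sketch of steps 1-5 above. The borderline test follows Borderline-SMOTE; `paraphrase` and `classify` are hypothetical stand-ins for the fine-tuned BART paraphraser and RoBERTa classifier, which are not reproduced here.

```python
# Sketch of the paraphrasing-based minority oversampling (steps 1-5).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def oversample_minority(X, texts, y, minority_label, m=5,
                        paraphrase=None, classify=None):
    """X: (n, d) embeddings; texts: n utterances; y: np.array of n labels."""
    if paraphrase is None:
        return []
    nn = NearestNeighbors(n_neighbors=m + 1).fit(X)  # +1 because self is included
    danger = []
    for i in np.where(y == minority_label)[0]:
        neigh = nn.kneighbors(X[i:i + 1], return_distance=False)[0][1:]
        m_maj = int((y[neigh] != minority_label).sum())  # m' majority neighbors
        if m_maj == m:
            continue                 # step 2: all-majority neighborhood -> noise
        if m / 2 <= m_maj < m:
            danger.append(i)         # steps 2-3: borderline -> DANGER set
    new_samples = []
    for i in danger:                 # steps 4-5: paraphrase borderline instances
        for cand in paraphrase(texts[i]):
            # 'Paraphrasing' keeps every candidate; 'ParaMote' additionally
            # requires the classifier to predict the same minority label.
            if classify is None or classify(cand) == minority_label:
                new_samples.append((cand, minority_label))
    return new_samples
```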
{
"text": "We use the Paraphrasing model and the classifier as a data augmentation method to augment the labeled training data (refer to as 'Aug' in our experiments). Note that we augment the paraphrased sample if it belongs to the same minority class ('ParaMote') as we do not want to inject noise while solving the data imbalance problem. The opposite is also possible for other purposes such as generating semantically similar adversaries (Ribeiro et al., 2018) .",
"cite_spans": [
{
"start": 431,
"end": 453,
"text": "(Ribeiro et al., 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Balancing and Augmentation",
"sec_num": "2.3"
},
{
"text": "To conduct our experiments, we use the BANK-ING (Casanueva et al., 2020) and CLINC (Larson et al., 2019) datasets similar to the DAC work (Zhang et al., 2021) . We also use another dataset called KidSpace that includes utterances from a Multimodal Learning Application for 5-to-8 years-old children (Sahay et al., 2019; Anderson et al., 2018) . We hope to utilize this system to label future utterances into relevant intents. Table 1 shows the statistics of the 3 datasets where 25% random classes are kept unseen at pre-training.",
"cite_spans": [
{
"start": 48,
"end": 72,
"text": "(Casanueva et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 138,
"end": 158,
"text": "(Zhang et al., 2021)",
"ref_id": "BIBREF38"
},
{
"start": 299,
"end": 319,
"text": "(Sahay et al., 2019;",
"ref_id": "BIBREF26"
},
{
"start": 320,
"end": 342,
"text": "Anderson et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 426,
"end": 433,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
{
"text": "The choice of pre-trained embeddings has the largest impact on the clustering results. We observe huge performance gains for the single domain KidSpace and BANKING datasets. For the multidomain and diverse CLINC dataset with the largest number of intents, we saw a slight degradation in performance. While this needs further investigation, we believe the dataset is diverse enough and already has very high clustering scores and that the improved sentence representations may not be helping further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation",
"sec_num": "3.1"
},
{
"text": "Seed selection is an important problem for limited data tasks. Law of large numbers does not hold and random sampling strategy may lead to larger variance in outcomes. We explored Cluster-based Selection (CB) and Predicted Cluster Sampling (PCS) besides other techniques (see detailed results in Appendix A.1). Our results trend towards smaller standard deviations and similar performance for the BANKING and CLINC datasets with the PCS method. Surprisingly, this does not hold for the KidSpace dataset that needs further investigation. Figure 2 shows the KidSpace data visualised with various colored clusters and centroids. While we non-randomly choose seed data, we still hide 25% of the classes at random (to enable unknown intent discovery). Our recommendation is to use PCS if one cannot run the training multiple times for certain situations to have less variance in results. Figure 3 shows the histogram for the seed data, which is highly imbalanced and may adversely impact the clustering performance. We apply Paraphrasing and ParaMote methods to balance the data. Paraphrasing almost always improves the performance while the additional classifier to check for class-label consistency (ParaMote) does not help. ",
"cite_spans": [],
"ref_spans": [
{
"start": 537,
"end": 545,
"text": "Figure 2",
"ref_id": null
},
{
"start": 883,
"end": 891,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Seed Selection",
"sec_num": "3.2"
},
{
"text": "We augmented the entire labeled data including the majority class using Paraphrasing (with classlabel consistency) by 3x in our experiments. We aimed to understand if this could help get a better pre-trained model that could eventually improve the clustering outcome. We do not observe any performance gains with the augmentation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.4"
},
{
"text": "Our goal in this work is to develop a wellsegmented learnt representation of the data with deep clustering and then to use the learnt representation to enable fast visual labeling. Figure 4 shows the two clustered representations, one without pretraining and BERT-base embedding while the other with a fine-tuned sentence BERT representation and pre-training. We can obtain well separated visual clusters using the latter approach. We use the drawing library human-learn to visually label the data. Figure 5 shows selected region of the data with various labels and class confusion. We notice that this representation not only helps with the labeling but also helps with correcting the labels and identify utterances that belong to multiple classes which cannot be easily segmented. For example, 'children-valid-answer' and 'children-invalid-grow' (invalid answers) contain semantically similar content depending on the game logic of the interaction. We perhaps need to group these together and use an alternative logic for implementing game semantics.",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 189,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 499,
"end": 507,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Interactive Data Labeling",
"sec_num": "3.5"
},
{
"text": "In this exploration, we have used fine-tuned sentence BERT model to significantly improve the clustering performance. Predicted Cluster Sampling strategy for seed data selection seems to be a promising approach with possibly lower variance in clustering performance for smaller data labeling tasks. Paraphrasing-based data imbalance handling slightly improves the clustering performance as well. Finally, we have utilized the learnt representation to develop a visual intent labeling system. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "3.6"
},
{
"text": "rasahq.github.io/whatlies/ 2 koaning.github.io/human-learn/ 3 github.com/RasaHQ/rasalit/blob/ main/notebooks/bulk-labelling/ bulk-labelling-ui.ipynb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://huggingface.co/sentence-transformers/bert-basenli-stsb-mean-tokens/tree/main",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "A Appendix",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "In addition to the Cluster-based Selection (CB) and Predicted Cluster Sampling (PCS) methods, we have explored other seed selection techniques compared with the Random Sampling. These are the Known Cluster-based Selection (KCB) and Clusterbased Sentence Embedding (CSE) methods. KCB is a variation of CB where we cluster into a number of known labels' subsets (based on known class ratio) and pick up certain % of data (based on labeled ratio) from each cluster's data points. CSE, on the other hand, is another variation of CB where, instead of BERT word embeddings as the pre-trained representations, we use the sentence embeddings model before running K-Means (the rest is the same as the CB method). Table 3 presents detailed clustering performance results on three datasets using all five seed selection methods we explored, with varying labeled ratio and BERT embeddings (standard/BERT-base vs. sentence/SBERT models). In Table 4 , we expand our analysis on the KidSpace dataset with data balancing/augmentation approaches on top of these five seed selection methods, once again with standard/sentence BERT embeddings. Table 5 presents additional results on the BANKING dataset to compare data balancing/augmentation methods on top of standard vs. the sentence BERT representations.",
"cite_spans": [],
"ref_spans": [
{
"start": 704,
"end": 711,
"text": "Table 3",
"ref_id": null
},
{
"start": 928,
"end": 935,
"text": "Table 4",
"ref_id": null
},
{
"start": 1125,
"end": 1132,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.1 Additional Experimental Results",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Kid space: Interactive learning in a smart environment",
"authors": [
{
"first": "Glen",
"middle": [
"J"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "Selvakumar",
"middle": [],
"last": "Panneer",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Carl",
"middle": [
"S"
],
"last": "Marshall",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Chierichetti",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Raffa",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sherry",
"suffix": ""
},
{
"first": "Daria",
"middle": [],
"last": "Loi",
"suffix": ""
},
{
"first": "Lenitra Megail",
"middle": [],
"last": "Durham",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Group Interaction Frontiers in Technology, GIFT'18",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3279981.3279986"
]
},
"num": null,
"urls": [],
"raw_text": "Glen J. Anderson, Selvakumar Panneer, Meng Shi, Carl S. Marshall, Ankur Agrawal, Rebecca Chierichetti, Giuseppe Raffa, John Sherry, Daria Loi, and Lenitra Megail Durham. 2018. Kid space: Interactive learning in a smart environment. In Pro- ceedings of the Group Interaction Frontiers in Tech- nology, GIFT'18, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Task-Oriented Dialogue as Dataflow Synthesis",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bufe",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Burkett",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Clausman",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Crawford",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Crim",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Deloach",
"suffix": ""
},
{
"first": "Leah",
"middle": [],
"last": "Dorner",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Kristin",
"middle": [],
"last": "Hayes",
"suffix": ""
},
{
"first": "Kellie",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Wendy",
"middle": [],
"last": "Iwaszuk",
"suffix": ""
},
{
"first": "Smriti",
"middle": [],
"last": "Jha",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Jayant",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Theo",
"middle": [],
"last": "Lanman",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"H"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Lintsbakh",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "556--571",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00333"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Andreas, John Bufe, David Burkett, Charles Chen, Josh Clausman, Jean Crawford, Kate Crim, Jordan DeLoach, Leah Dorner, Jason Eisner, Hao Fang, Alan Guo, David Hall, Kristin Hayes, Kellie Hill, Diana Ho, Wendy Iwaszuk, Smriti Jha, Dan Klein, Jayant Krishnamurthy, Theo Lanman, Percy Liang, Christopher H. Lin, Ilya Lintsbakh, Andy Mc- Govern, Aleksandr Nisnevich, Adam Pauls, Dmitrij Petters, Brent Read, Dan Roth, Subhro Roy, Jesse Rusak, Beth Short, Div Slomin, Ben Snyder, Stephon Striplin, Yu Su, Zachary Tellman, Sam Thomson, Andrei Vorobev, Izabela Witoszko, Jason Wolfe, Abby Wray, Yuchen Zhang, and Alexander Zotov. 2020. Task-Oriented Dialogue as Dataflow Synthesis. Transactions of the Association for Com- putational Linguistics, 8:556-571.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Extracting paraphrases from a parallel corpus",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "50--57",
"other_ids": {
"DOI": [
"10.3115/1073012.1073020"
]
},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Kathleen R. McKeown. 2001. Ex- tracting paraphrases from a parallel corpus. In Pro- ceedings of the 39th Annual Meeting of the Associ- ation for Computational Linguistics, pages 50-57, Toulouse, France. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bokeh: Python library for interactive visualization",
"authors": [],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bokeh Development Team. 2018. Bokeh: Python li- brary for interactive visualization.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1075"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Efficient intent detection with dual sentence encoders",
"authors": [
{
"first": "I\u00f1igo",
"middle": [],
"last": "Casanueva",
"suffix": ""
},
{
"first": "Tadas",
"middle": [],
"last": "Tem\u010dinas",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Gerz",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.nlp4convai-1.5"
]
},
"num": null,
"urls": [],
"raw_text": "I\u00f1igo Casanueva, Tadas Tem\u010dinas, Daniela Gerz, Matthew Henderson, and Ivan Vuli\u0107. 2020. Efficient intent detection with dual sentence encoders. In Pro- ceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, pages 38-45, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sheng-Yi",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [],
"last": "St",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Con- stant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder. CoRR, abs/1803.11175.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Semeval-2017 task 1: Semantic textual similarity -multilingual and cross-lingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [
"T"
],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel M. Cer, Mona T. Diab, Eneko Agirre, I\u00f1igo Lopez-Gazpio, and Lucia Specia. 2017. Semeval- 2017 task 1: Semantic textual similarity -multilin- gual and cross-lingual focused evaluation. CoRR, abs/1708.00055.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Smote: synthetic minority over-sampling technique",
"authors": [
{
"first": "Nitesh",
"middle": [
"V"
],
"last": "Chawla",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"W"
],
"last": "Bowyer",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"O"
],
"last": "Hall",
"suffix": ""
},
{
"first": "W",
"middle": [
"Philip"
],
"last": "Kegelmeyer",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of artificial intelligence research",
"volume": "16",
"issue": "",
"pages": "321--357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. 2002. Smote: synthetic minority over-sampling technique. Journal of artifi- cial intelligence research, 16:321-357.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Supervised learning of universal sentence representations from natural language inference data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "670--680",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1070"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680, Copen- hagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatically constructing a corpus of sentential paraphrases",
"authors": [
{
"first": "William",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Third International Workshop on Paraphrasing (IWP2005)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William B. Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving imbalanced learning through a heuristic oversampling method based on k-means and smote",
"authors": [
{
"first": "Georgios",
"middle": [],
"last": "Douzas",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Bacao",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Last",
"suffix": ""
}
],
"year": 2018,
"venue": "Information Sciences",
"volume": "465",
"issue": "",
"pages": "1--20",
"other_ids": {
"DOI": [
"10.1016/j.ins.2018.06.056"
]
},
"num": null,
"urls": [],
"raw_text": "Georgios Douzas, Fernando Bacao, and Felix Last. 2018. Improving imbalanced learning through a heuristic oversampling method based on k-means and smote. Information Sciences, 465:1-20.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A review on ensembles for the class imbalance problem: bagging-, boosting-, and hybrid-based approaches",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Galar",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Fernandez",
"suffix": ""
},
{
"first": "Edurne",
"middle": [],
"last": "Barrenechea",
"suffix": ""
},
{
"first": "Humberto",
"middle": [],
"last": "Bustince",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Herrera",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)",
"volume": "42",
"issue": "4",
"pages": "463--484",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Galar, Alberto Fernandez, Edurne Barrenechea, Humberto Bustince, and Francisco Herrera. 2011. A review on ensembles for the class imbalance problem: bagging-, boosting-, and hybrid-based ap- proaches. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(4):463-484.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Largescale, diverse, paraphrastic bitexts via sampling and clustering",
"authors": [
{
"first": "J",
"middle": [
"Edward"
],
"last": "Hu",
"suffix": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Nils",
"middle": [],
"last": "Holzenberger",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "44--54",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1005"
]
},
"num": null,
"urls": [],
"raw_text": "J. Edward Hu, Abhinav Singh, Nils Holzenberger, Matt Post, and Benjamin Van Durme. 2019. Large- scale, diverse, paraphrastic bitexts via sampling and clustering. In Proceedings of the 23rd Confer- ence on Computational Natural Language Learning (CoNLL), pages 44-54, Hong Kong, China. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Skip-thought vectors",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "28",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Kiros, Yukun Zhu, Russ R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Handling imbalanced datasets: A review",
"authors": [
{
"first": "Sotiris",
"middle": [],
"last": "Kotsiantis",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Kanellopoulos",
"suffix": ""
},
{
"first": "Panayiotis",
"middle": [],
"last": "Pintelas",
"suffix": ""
}
],
"year": 2006,
"venue": "GESTS International Transactions on Computer Science and Engineering",
"volume": "30",
"issue": "1",
"pages": "25--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sotiris Kotsiantis, Dimitris Kanellopoulos, Panayiotis Pintelas, et al. 2006. Handling imbalanced datasets: A review. GESTS International Transactions on Computer Science and Engineering, 30(1):25-36.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A continuously growing dataset of sentential paraphrases",
"authors": [
{
"first": "Wuwei",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Siyu",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wuwei Lan, Siyu Qiu, Hua He, and Wei Xu. 2017. A continuously growing dataset of sentential para- phrases. CoRR, abs/1708.00391.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An evaluation dataset for intent classification and out-of-scope prediction",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Larson",
"suffix": ""
},
{
"first": "Anish",
"middle": [],
"last": "Mahendran",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"J"
],
"last": "Peper",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Parker",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"K"
],
"last": "Kummerfeld",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Leach",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"A"
],
"last": "Laurenzano",
"suffix": ""
},
{
"first": "Lingjia",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Mars",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1311--1316",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1131"
]
},
"num": null,
"urls": [],
"raw_text": "Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A. Laurenzano, Lingjia Tang, and Jason Mars. 2019. An evaluation dataset for intent classification and out-of-scope prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1311-1316, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "BART: denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019. BART: denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. CoRR, abs/1910.13461.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction",
"authors": [
{
"first": "L",
"middle": [],
"last": "Mcinnes",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Healy",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Melville",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. McInnes, J. Healy, and J. Melville. 2018. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. ArXiv e-prints.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Introduction to the practice of statistics",
"authors": [
{
"first": "David",
"middle": [
"S"
],
"last": "Moore",
"suffix": ""
},
{
"first": "George",
"middle": [
"P"
],
"last": "Mccabe",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David S. Moore and George P. McCabe. 1989. Intro- duction to the practice of statistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Sentencebert: Sentence embeddings using siamese bertnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Semantically equivalent adversarial rules for debugging NLP models",
"authors": [
{
"first": "Marco Tulio",
"middle": [],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "856--865",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1079"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversar- ial rules for debugging NLP models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856-865, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Modeling intent, dialog policies and response adaptation for goaloriented interactions",
"authors": [
{
"first": "Saurav",
"middle": [],
"last": "Sahay",
"suffix": ""
},
{
"first": "Shachi",
"middle": [
"H"
],
"last": "Kumar",
"suffix": ""
},
{
"first": "Eda",
"middle": [],
"last": "Okur",
"suffix": ""
},
{
"first": "Haroon",
"middle": [],
"last": "Syed",
"suffix": ""
},
{
"first": "Lama",
"middle": [],
"last": "Nachman",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Workshop on the Semantics and Pragmatics of Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saurav Sahay, Shachi H. Kumar, Eda Okur, Haroon Syed, and Lama Nachman. 2019. Modeling intent, dialog policies and response adaptation for goal- oriented interactions. In Proceedings of the 23rd Workshop on the Semantics and Pragmatics of Di- alogue, London, United Kingdom. SEMDIAL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Conversation Learner -a machine teaching tool for building dialog managers for task-oriented dialog systems",
"authors": [
{
"first": "Swadheen",
"middle": [],
"last": "Shukla",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Liden",
"suffix": ""
},
{
"first": "Shahin",
"middle": [],
"last": "Shayandeh",
"suffix": ""
},
{
"first": "Eslam",
"middle": [],
"last": "Kamal",
"suffix": ""
},
{
"first": "Jinchao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Mazzola",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Baolin",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "343--349",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-demos.39"
]
},
"num": null,
"urls": [],
"raw_text": "Swadheen Shukla, Lars Liden, Shahin Shayandeh, Es- lam Kamal, Jinchao Li, Matt Mazzola, Thomas Park, Baolin Peng, and Jianfeng Gao. 2020. Conversation Learner -a machine teaching tool for building dialog managers for task-oriented dialog systems. In Pro- ceedings of the 58th Annual Meeting of the Associa- tion for Computational Linguistics: System Demon- strations, pages 343-349, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Dynamic pooling and unfolding recursive autoencoders for paraphrase detection",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 24th International Conference on Neural Information Processing Systems, NIPS'11",
"volume": "",
"issue": "",
"pages": "801--809",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011. Dynamic pooling and unfolding recursive autoen- coders for paraphrase detection. In Proceedings of the 24th International Conference on Neural Information Processing Systems, NIPS'11, page 801-809, Red Hook, NY, USA. Curran Assoc. Inc.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Neural machine translation for paraphrase generation",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Sokolov",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Filimonov",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.14223"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Sokolov and Denis Filimonov. 2020. Neural ma- chine translation for paraphrase generation. arXiv preprint arXiv:2006.14223.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Going beyond T-SNE: Exposing whatlies in text embeddings",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Warmerdam",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kober",
"suffix": ""
},
{
"first": "Rachael",
"middle": [],
"last": "Tatman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS)",
"volume": "",
"issue": "",
"pages": "52--60",
"other_ids": {
"DOI": [
"10.18653/v1/2020.nlposs-1.8"
]
},
"num": null,
"urls": [],
"raw_text": "Vincent Warmerdam, Thomas Kober, and Rachael Tatman. 2020. Going beyond T-SNE: Exposing whatlies in text embeddings. In Proceedings of Sec- ond Workshop for NLP Open Source Software (NLP- OSS), pages 52-60, Online. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Rasa algorithm whiteboard -bulk labelling ui. The relevant notebook can be found on GitHub",
"authors": [
{
"first": "Vincent",
"middle": [
"D"
],
"last": "Warmerdam",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent D. Warmerdam. 2020. Rasa algorithm whiteboard -bulk labelling ui. The relevant notebook can be found on GitHub: https: //github.com/RasaHQ/rasalit/blob/ main/notebooks/bulk-labelling/ bulk-labelling-ui.ipynb.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "451--462",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1042"
]
},
"num": null,
"urls": [],
"raw_text": "John Wieting and Kevin Gimpel. 2018. ParaNMT- 50M: Pushing the limits of paraphrastic sentence em- beddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 451-462, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Learning paraphrastic sentence embeddings from back-translated bitext",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Mallinson",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "274--285",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1026"
]
},
"num": null,
"urls": [],
"raw_text": "John Wieting, Jonathan Mallinson, and Kevin Gim- pel. 2017. Learning paraphrastic sentence embed- dings from back-translated bitext. In Proceedings of the 2017 Conference on Empirical Methods in Natu- ral Language Processing, pages 274-285, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel R. Bow- man. 2017. A broad-coverage challenge corpus for sentence understanding through inference. CoRR, abs/1704.05426.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification",
"authors": [
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Tar",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A Cross-lingual Adver- sarial Dataset for Paraphrase Identification. In Proc. of EMNLP.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Discovering new intents with deep aligned clustering",
"authors": [
{
"first": "Hanlei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ting-En",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Lyu",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanlei Zhang, Hua Xu, Ting-En Lin, and Rui Lyu. 2021. Discovering new intents with deep aligned clustering. In Proceedings of the AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "PAWS: Paraphrase Adversaries from Word Scrambling",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan Zhang, Jason Baldridge, and Luheng He. 2019. PAWS: Paraphrase Adversaries from Word Scram- bling. In Proc. of NAACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Interactive Labeling System Architecture 2 Methodology",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Figure 2: Cluster Visualization",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "Cluster Visualization on KidSpace with BERT-base/SBERT w/wo pre-training Figure 5: Cluster Mixup on KidSpace due to Game Semantics",
"uris": null
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": "Dataset Statistics"
},
"TABREF3": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": "Semi-supervised DeepAlign Clustering Results with BERT Model, Data Balance/Augmentation and Seed Selection on BANKING, CLINC, and KidSpace datasets (averaged results over 10 runs with different seed values; labeled ratio is 0.1 for BANKING and CLINC, 0.2 for KidSpace; known class ratio is 0.75 in all cases)"
},
"TABREF5": {
"content": "<table><tr><td>Dataset</td><td>BERT</td><td>Data Bal/Aug</td><td>Seed Selection</td><td colspan=\"2\">labeled_ratio NMI</td><td>ARI</td><td>ACC</td></tr><tr><td colspan=\"2\">BANKING Standard</td><td>None</td><td>RandomSampling</td><td>0.1</td><td colspan=\"3\">79.22 52.96 63.84</td></tr><tr><td/><td/><td/><td>PredictedClusterSampling</td><td>0.1</td><td colspan=\"3\">78.62 51.72 62.72</td></tr><tr><td/><td>Standard</td><td>Paraphrasing</td><td>RandomSampling</td><td>0.1</td><td colspan=\"3\">79.31 53.31 64.83</td></tr><tr><td/><td/><td/><td>PredictedClusterSampling</td><td>0.1</td><td colspan=\"3\">78.79 52.41 64.62</td></tr><tr><td/><td>Standard</td><td>ParaMote</td><td>RandomSampling</td><td>0.1</td><td colspan=\"3\">79.62 54.08 65.37</td></tr><tr><td/><td/><td/><td>PredictedClusterSampling</td><td>0.1</td><td colspan=\"3\">79.30 53.08 65.08</td></tr><tr><td/><td>Sentence</td><td>None</td><td>RandomSampling</td><td>0.1</td><td colspan=\"3\">82.96 60.72 71.27</td></tr><tr><td/><td/><td/><td>PredictedClusterSampling</td><td>0.1</td><td colspan=\"3\">82.11 58.43 69.78</td></tr><tr><td/><td colspan=\"2\">Sentence Paraphrasing</td><td>RandomSampling</td><td>0.1</td><td colspan=\"3\">83.00 60.95 71.95</td></tr><tr><td/><td/><td/><td>PredictedClusterSampling</td><td>0.1</td><td colspan=\"3\">82.20 58.86 69.62</td></tr><tr><td/><td>Sentence</td><td>ParaMote</td><td>RandomSampling</td><td>0.1</td><td colspan=\"3\">82.58 59.54 69.92</td></tr><tr><td/><td/><td/><td>PredictedClusterSampling</td><td>0.1</td><td colspan=\"3\">81.88 58.13 69.74</td></tr><tr><td/><td>Sentence</td><td>Aug (3x)</td><td>RandomSampling</td><td>0.1</td><td colspan=\"3\">82.94 60.78 71.66</td></tr><tr><td/><td/><td/><td>PredictedClusterSampling</td><td>0.1</td><td colspan=\"3\">81.69 58.18 69.99</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "Semi-supervised DeepAlign Clustering Results with BERT Model, Data Balance/Augmentation and Seed Selection on KidSpace dataset (averaged results over 10 runs with different seed values; known class ratio is 0.75 in all cases)"
},
"TABREF6": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": "Semi-supervised DeepAlign Clustering Results with BERT Model, Data Balance/Augmentation and Seed Selection on BANKING dataset (averaged results over 10 runs with different seed values; known class ratio is 0.75 in all cases)"
}
}
}
}