The dataset viewer is not available for this split: the content of the first rows (539,881 B) exceeds the maximum supported size (200,000 B) even after truncation (error code: TooBigContentError). A sample record is reproduced below instead.

Sample data snippet (JSON)

{
      "paper": {
        "paper_id": "2103.15871",
        "metadata": {
          "id": "2103.15871",
          "submitter": "Varun Kumar",
          "authors": "Luoxin Chen, Francisco Garcia, Varun Kumar, He Xie, Jianhua Lu",
          "title": "Industry Scale Semi-Supervised Learning for Natural Language\n  Understanding",
          "journal_ref": null,
          "doi": null,
          "report_no": null,
          "categories": "cs.CL cs.AI cs.LG",
          "license": "http://creativecommons.org/licenses/by/4.0/",
          "abstract": "  This paper presents a production Semi-Supervised Learning (SSL) pipeline\nbased on the student-teacher framework, which leverages millions of unlabeled\nexamples to improve Natural Language Understanding (NLU) tasks. We investigate\ntwo questions related to the use of unlabeled data in production SSL context:\n1) how to select samples from a huge unlabeled data pool that are beneficial\nfor SSL training, and 2) how do the selected data affect the performance of\ndifferent state-of-the-art SSL techniques. We compare four widely used SSL\ntechniques, Pseudo-Label (PL), Knowledge Distillation (KD), Virtual Adversarial\nTraining (VAT) and Cross-View Training (CVT) in conjunction with two data\nselection methods including committee-based selection and submodular\noptimization based selection. We further examine the benefits and drawbacks of\nthese techniques when applied to intent classification (IC) and named entity\nrecognition (NER) tasks, and provide guidelines specifying when each of these\nmethods might be beneficial to improve large scale NLU systems.\n",
          "versions": [
            { "version": "v1", "created": "Mon, 29 Mar 2021 18:24:02 GMT" }
          ],
          "update_date": "2021-03-31",
          "authors_parsed": [
            ["Chen", "Luoxin", null],
            ["Garcia", "Francisco", null],
            ["Kumar", "Varun", null],
            ["Xie", "He", null],
            ["Lu", "Jianhua", null]
          ]
        },
        "discipline": "Computer Science",
        "abstract": {
          "section": "Abstract",
          "text": "  This paper presents a production Semi-Supervised Learning (SSL) pipeline\nbased on the student-teacher framework, which leverages millions of unlabeled\nexamples to improve Natural Language Understanding (NLU) tasks. We investigate\ntwo questions related to the use of unlabeled data in production SSL context:\n1) how to select samples from a huge unlabeled data pool that are beneficial\nfor SSL training, and 2) how do the selected data affect the performance of\ndifferent state-of-the-art SSL techniques. We compare four widely used SSL\ntechniques, Pseudo-Label (PL), Knowledge Distillation (KD), Virtual Adversarial\nTraining (VAT) and Cross-View Training (CVT) in conjunction with two data\nselection methods including committee-based selection and submodular\noptimization based selection. We further examine the benefits and drawbacks of\nthese techniques when applied to intent classification (IC) and named entity\nrecognition (NER) tasks, and provide guidelines specifying when each of these\nmethods might be beneficial to improve large scale NLU systems.\n",
          "cite_spans": [],
          "ref_spans": []
        },
        "bib_entries": {
          "d5160b73131a9eab9d63ddd96ab549ca183fc019": {
            "bib_entry_raw": "Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, and Chenlei Guo. 2020. Knowledge distillation from internal representations. In AAAI, pages 7350–7357.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": "Computer Science",
            "ids": {
              "open_alex_id": "https://openalex.org/W2997666887",
              "arxiv_id": null,
              "pubmed_id": null,
              "pmc_id": null,
              "doi": "10.1609/aaai.v34i05.6229"
            }
          },
          "d81f85d95d72fdc9f4a428b794712d0570fea6f2": {
            "bib_entry_raw": "Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov, and Dmitry Vetrov. 2020. Pitfalls of in-domain uncertainty estimation and ensembling in deep learning.",
            "contained_arXiv_ids": [
              {
                "id": "2002.06470",
                "text": "Pitfalls of in-domain uncertainty estimation and ensembling in deep learning.",
                "start": 78,
                "end": 155
              }
            ],
            "contained_links": [],
            "discipline": "Computer Science",
            "ids": {
              "open_alex_id": "https://openalex.org/W2995464762",
              "arxiv_id": null,
              "pubmed_id": null,
              "pmc_id": null,
              "doi": null
            }
          },
          "03e89d2071878c90439d6210f8f5694236144b11": {
            "bib_entry_raw": "David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. 2019. Mixmatch: A holistic approach to semi-supervised learning. In Advances in Neural Information Processing Systems, pages 5049–5059.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": "",
            "ids": {
              "open_alex_id": "https://openalex.org/W2978426779",
              "arxiv_id": null,
              "pubmed_id": null,
              "pmc_id": null,
              "doi": null
            }
          },
          "34b790c0eff9787af15534bd9727ba18611a803f": {
            "bib_entry_raw": "Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of the eleventh annual conference on Computational learning theory, pages 92–100.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": "Computer Science",
            "ids": {
              "open_alex_id": "https://openalex.org/W2048679005",
              "arxiv_id": null,
              "pubmed_id": null,
              "pmc_id": null,
              "doi": "10.1145/279943.279962"
            }
          },
          "b0f70c4792d483a662ad260d63c46bc281aff4ae": {
            "bib_entry_raw": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information.",
            "contained_arXiv_ids": [
              {
                "id": "1607.04606",
                "text": "Enriching word vectors with subword information.",
                "start": 73,
                "end": 121
              }
            ],
            "contained_links": [],
            "discipline": "Computer Science",
            "ids": {
              "open_alex_id": "https://openalex.org/W2493916176",
              "arxiv_id": null,
              "pubmed_id": null,
              "pmc_id": null,
              "doi": "10.1162/tacl_a_00051"
            }
          },
          "e3f43c42b178886fe6a273c8a515489709acb130": {
            "bib_entry_raw": "Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien. 2006. Introduction to semi-supervised learning. In Semi-Supervised Learning.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": "Computer Science",
            "ids": {
              "open_alex_id": "https://openalex.org/W2506128341",
              "arxiv_id": null,
              "pubmed_id": null,
              "pmc_id": null,
              "doi": null
            }
          },
          "98318a040e62c35adf72d55b882beca7b69653b9": {
            "bib_entry_raw": "Luoxin Chen, Weitong Ruan, Xinyue Liu, and Jianhua Lu. 2020. SeqVAT: Virtual adversarial training for semi-supervised sequence labeling. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8801–8811, Online. Association for Computational Linguistics.",
            "contained_arXiv_ids": [],
            "contained_links": [
              {
                "url": "https://doi.org/10.18653/v1/2020.acl-main.777",
                "text": "SeqVAT: Virtual adversarial training for semi-supervised sequence labeling. In",
                "start": 61,
                "end": 139
              }
            ],
            "discipline": null,
            "ids": null
          },
          "0cc0f236f438e0b67d0f42d5c316d96b101f8287": {
            "bib_entry_raw": "Eunah Cho, He Xie, John P. Lalor, Varun Kumar, and William M. Campbell. 2019. Efficient semi-supervised learning for natural language understanding by optimizing diversity. 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).",
            "contained_arXiv_ids": [],
            "contained_links": [
              {
                "url": "https://doi.org/10.1109/asru46091.2019.9003747",
                "text": "Efficient semi-supervised learning for natural language understanding by optimizing diversity.",
                "start": 78,
                "end": 172
              }
            ],
            "discipline": null,
            "ids": null
          },
          "3e267afacd645acd58cf04cf7a628c50e12a9329": {
            "bib_entry_raw": "Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc V. Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1914–1925. Association for Computational Linguistics.",
            "contained_arXiv_ids": [],
            "contained_links": [
              {
                "url": "https://doi.org/10.18653/v1/d18-1217",
                "text": "Semi-supervised sequence modeling with cross-view training. In",
                "start": 77,
                "end": 139
              }
            ],
            "discipline": null,
            "ids": null
          },
          "df2545b3a0bf92a4c492db04ec6d7142b197d6eb": {
            "bib_entry_raw": "Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, Maël Primet, and Joseph Dureau. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. ArXiv, abs/1805.10190.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": null,
            "ids": null
          },
          "7c58e3eafed3afe2c059d1b766808fcbd350e2bd": {
            "bib_entry_raw": "Yifan Ding, Liqiang Wang, Deliang Fan, and Boqing Gong. 2018. A semi-supervised two-stage approach to learning from noisy labels. 2018 IEEE Winter Conference on Applications of Computer Vision (WACV).",
            "contained_arXiv_ids": [],
            "contained_links": [
              {
                "url": "https://doi.org/10.1109/wacv.2018.00138",
                "text": "A semi-supervised two-stage approach to learning from noisy labels.",
                "start": 62,
                "end": 129
              }
            ],
            "discipline": null,
            "ids": null
          },
          "d56a30018f53ddb94b8f0bc778bf422fa0c3aa96": {
            "bib_entry_raw": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": null,
            "ids": null
          },
          "3f88da7c4bf7953e0744300aa749ad4ea51ccf64": {
            "bib_entry_raw": "Heng Ji and Ralph Grishman. 2006. Data selection in semi-supervised learning for name tagging. In Proceedings of the Workshop on Information Extraction Beyond The Document, pages 48–55, Sydney, Australia. Association for Computational Linguistics.",
            "contained_arXiv_ids": [],
            "contained_links": [
              {
                "url": "https://www.aclweb.org/anthology/W06-0206",
                "text": "Data selection in semi-supervised learning for name tagging. In",
                "start": 34,
                "end": 97
              }
            ],
            "discipline": null,
            "ids": null
          },
          "bd09f69b1c6b8444ac39821d741e6ab9c7ca505d": {
            "bib_entry_raw": "Katrin Kirchhoff and Jeff Bilmes. 2014. Submodularity for data selection in machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 131–141.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": null,
            "ids": null
          },
          "ce1fd2a2f25b5cf04b392c438b3c6b0df2228815": {
            "bib_entry_raw": "Jeremiah Liu, John W. Paisley, M. Kioumourtzoglou, and B. Coull. 2019a. Accurate uncertainty estimation and decomposition in ensemble learning. In NeurIPS.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": null,
            "ids": null
          },
          "9d69ca232191bab14501dcac52f2f58b9fe70b8a": {
            "bib_entry_raw": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019b. Improving multi-task deep neural networks via knowledge distillation for natural language understanding. arXiv preprint arXiv:1904.09482.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": null,
            "ids": null
          },
          "aac8015edb82c072cece8b4c602023d445ebf1ef": {
            "bib_entry_raw": "David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of the main conference on human language technology conference of the North American Chapter of the Association of Computational Linguistics, pages 152–159. Association for Computational Linguistics.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": null,
            "ids": null
          },
          "6afa85c303a47aeb1fb8a9267575fea1ff031bcd": {
            "bib_entry_raw": "Takeru Miyato, Andrew M. Dai, and Ian J. Goodfellow. 2017. Adversarial training methods for semi-supervised text classification. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.",
            "contained_arXiv_ids": [],
            "contained_links": [
              {
                "url": "https://openreview.net/forum?id=r1X3g2_xl",
                "text": "Adversarial training methods for semi-supervised text classification. In",
                "start": 59,
                "end": 131
              }
            ],
            "discipline": null,
            "ids": null
          },
          "cf0cfc9674d128c71bd1466afe94cc889673cc14": {
            "bib_entry_raw": "Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2019. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Trans. Pattern Anal. Mach. Intell., 41(8):1979–1993.",
            "contained_arXiv_ids": [],
            "contained_links": [
              {
                "url": "https://doi.org/10.1109/TPAMI.2018.2858821",
                "text": "Virtual adversarial training: A regularization method for supervised and semi-supervised learning.",
                "start": 71,
                "end": 169
              }
            ],
            "discipline": null,
            "ids": null
          },
          "99fac70d55c1bc97b0748c221563af348f87c342": {
            "bib_entry_raw": "Avital Oliver, Augustus Odena, Colin Raffel, Ekin D. Cubuk, and Ian J. Goodfellow. 2018. Realistic evaluation of deep semi-supervised learning algorithms.",
            "contained_arXiv_ids": [
              {
                "id": "1804.09170",
                "text": "Realistic evaluation of deep semi-supervised learning algorithms.",
                "start": 89,
                "end": 154
              }
            ],
            "contained_links": [],
            "discipline": null,
            "ids": null
          },
          "4b1c521cd62705dc38bd084dc0587713b9b6d458": {
            "bib_entry_raw": "Sree Hari Krishnan Parthasarathi and Nikko Strom. 2019. Lessons from building acoustic models with a million hours of speech. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6670–6674. IEEE.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": null,
            "ids": null
          },
          "1e2b68ee0b0825ec0bfbb1f0072097f7f02d9f80": {
            "bib_entry_raw": "P. J. Price. 1990. Evaluation of spoken language systems: the atis domain. In HLT.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": null,
            "ids": null
          },
          "5784b04dd9b18b7e5a157a6ccd1dec31df8db12d": {
            "bib_entry_raw": "Sebastian Ruder and Barbara Plank. 2018. Strong baselines for neural semi-supervised learning under domain shift. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).",
            "contained_arXiv_ids": [],
            "contained_links": [
              {
                "url": "https://doi.org/10.18653/v1/p18-1096",
                "text": "Strong baselines for neural semi-supervised learning under domain shift.",
                "start": 41,
                "end": 113
              }
            ],
            "discipline": null,
            "ids": null
          },
          "ffa26c06c36fb86e50a9af61f06524b3aac58a04": {
            "bib_entry_raw": "Kai Wei, Rishabh Iyer, and Jeff Bilmes. 2015. Submodularity in data subset selection and active learning. In International Conference on Machine Learning, pages 1954–1963.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": null,
            "ids": null
          },
          "78061be39236427e5c5bb4e974c01ba81ce49c0b": {
            "bib_entry_raw": "I Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, and Dhruv Mahajan. 2019. Billion-scale semi-supervised learning for image classification. arXiv preprint arXiv:1905.00546.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": null,
            "ids": null
          },
          "943c6d5534f1df1ba12d04a712ad573474152d33": {
            "bib_entry_raw": "David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd annual meeting of the association for computational linguistics, pages 189–196.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": null,
            "ids": null
          },
          "ab0caeccc5148785b654d9797a05459664d1c7ff": {
            "bib_entry_raw": "Zhi-Hua Zhou and Ming Li. 2005. Tri-training: Exploiting unlabeled data using three classifiers. IEEE Transactions on knowledge and Data Engineering, 17(11):1529–1541.",
            "contained_arXiv_ids": [],
            "contained_links": [],
            "discipline": null,
            "ids": null
          }
        },
        "inlined_texts": [
          {
            "section": "Introduction",
            "text": "Voice-assistants with speech and natural language understanding (NLU) are becoming increasingly prevalent in every day life. These systems, such as Google Now, Alexa, or Siri, are able to respond to queries pertaining multiple domains (e.g., music, weather). An NLU system commonly consists of an intent classifier (IC) and named entity recognizer (NER). It takes text input from an automatic speech recognizer and predicts intent and entities. For example, if a user asks “play lady gaga”, the IC classifies the query to intent of PlayMusic, and the NER classifies “lady gaga” as Artist. An important requirement for voice-assistants is the ability to continuously add support for new functionalities, i.e., new intents, or new entity types, while improving recognition accuracy for the existing ones. Having high quality labeled data is the key to achieve this goal. However, obtaining human annotation is an expensive and time-consuming process.\n"
          },
          {
            "section": "Introduction",
            "text": "Semi-Supervised Learning (SSL) provides a framework for utilizing large amount of unlabeled data when obtaining labels is expensive , , . SSL techniques have been shown to improve deep models performance across different machine learning tasks including text classification, machine translation, image classification , , , , , . A common practice to evaluate SSL algorithms is to take an existing labeled dataset and only use a small fraction of training data as labeled data, while treating the rest of the data as unlabeled dataset. Such evaluation, often constrained to the cases when labeled data is scarce, raises questions about the usefulness of different SSL algorithms in a real-world setting .\n"
          },
          {
            "section": "Introduction",
            "text": "In voice assistants, we face additional challenges while applying SSL techniques at scale including (1) how much unlabeled data should we use for SSL and how to select unlabeled data from a large pool of unlabeled data? (2) Most SSL benchmarks make the assumption that unlabeled datasets come from the same distribution as the labeled datasets. This assumption is often violated as, by design, the labeled training datasets also contain synthetic data, crowd-sourced data to represent anticipated usages of a functionality, and unlabeled data often contain a lot of out of domain data. (3) Unlike widely used NLU datasets such as SNIPS , ATIS , real-world voice assistant datasets are much larger and have a lot of redundancy because some queries such as turn on lights might be much more frequent than others. Due to such evaluation concerns, performance of different SSL techniques in real-world NLU applications is still in question.\n"
          },
          {
            "section": "Introduction",
            "text": "To address these issues, we study three data selection methods to select unlabeled data and evaluate how the selected data affect the performance of different SSL methods on a real-world NLU dataset. This paper provides three contributions: (1) Design of a production SSL pipeline which can be used to intelligently select unlabeled data to train SSL models (2) Experimental comparison of four SSL techniques including, Pseudo-Label, Knowledge Distillation, Cross-View Training, and Virtual Adversarial Training in a real-world voice assistant setting (3) Operational recommendations for NLP practitioners who would like to employ SSL in production setting.\n"
          },
          {
            "section": "Background",
            "text": "Semi-Supervised Learning techniques are capable of providing large improvements in model performance with little effort, which could play a crucial role in large scale systems in industry. In supervised learning, given a labeled dataset $\\mathcal {D}_l$  composed of input-label pairs $(x,y)$ , the goal is to learn a prediction model $f_{\\theta }(x)$ , with parameters $\\theta$ , that is able to predict the correct label $y^{\\prime }$  corresponding to a new unseen input instance $x^{\\prime }$ . SSL techniques aim to leverage an unlabeled dataset, $\\mathcal {D}_u$ , to create better performing models than those that could be obtained by only using $\\mathcal {D}_l$ .\n"
          },
          {
            "section": "Background",
            "text": "The two widely used SSL methods are: Pseudo-Label (PL), and Knowledge Distillation (KD). In PL, a teacher model trained on labeled data is used to produce pseudo-labels for the unlabeled data set. A student model trained on the union of the labeled and pseudo-labeled data sets, often outperforms the teacher model. , . On the other hand, KD SSL methods do not assign a particular label to an unlabeled instance, but instead consider the whole distribution over the label space , , . In KD, it is hypothesized that leveraging the probability distribution over all labels provides more information than assuming a definitive label belonging to one particular class .\n"
          },
          {
            "section": "Background",
            "text": "In addition to PL and KD, Virtual Adversarial Training (VAT) and Cross-View Training (CVT) have achieved state-of-the-art SSL performance on various tasks including text classification, named entity recognition, and dependency parsing , , , . In this paper, we conduct comprehensive experiments and analysis related to these commonly used SSL techniques, and discuss their pros and cons in the industry setting.\n"
          },
          {
            "section": "Background",
            "text": "Data selection for SSL has been explored for different tasks including image classification  , NER  , . Model confidence based data selection is a widely used technique for SSL data selection where unlabeled data is selected on the basis of a classifier's confidence. Due to the abundance of unlabeled data in production voice-assistants, model confidence based filtering leads to a very large data pool. To overcome this issue, we study different data selection algorithm which can further reduce the size of unlabeled data.\n"
          },
          {
            "section": "Methods",
            "text": "We are interested in studying two different questions relevant to the use of unlabeled data in production environments: 1) how to effectively select SSL data from a large pool of unlabeled data, and 2) how do SSL techniques perform in realistic scenarios?\nTo do so, we focus on the tasks of intent classification (IC) and named entity recognition (NER), two important components in NLU systems.\n"
          },
          {
            "section": "Methods",
            "text": "The model architecture we study is an LSTM-based multi-task model for IC and NER tasks, where we use 300-dimension fastText word embeddings , trained on a large voice assistant corpus.The text corpus contains data transcribed by an automatic speech recognition system. A shared 256-dimension Bi-LSTM encoder and two separate task-specific Bi-LSTM encoders (256-dimension) are applied to encode the sentences. A softmax layer and a conditional random field (CRF) layer are used to produce predictions for IC and NER, respectively.\n"
          },
          {
            "section": "Methods",
            "text": "Below we describe our implementation of the SSL techniques and the data selection methods studied.\n"
          },
          {
            "section": "Data Selection Approaches",
            "text": "In the industry setting, we often encounter the situation where we have extremely large pool of unlabeled data, intractable to have SSL methods run on the entire dataset. Given this challenge, we propose a two stage data selection pipeline to create an unlabeled SSL pool, $\\mathcal {D}_u$ , of a practical size, from the much larger pool of available data.\n"
          },
          {
            "section": "Data Selection Approaches",
            "text": "Data selection pipeline, shown in Figure REF , first uses a classifier's confidence score to filter domain specific unlabeled data from a very large pool of unlabeled data, which might contain data from multiple domains. For a production system, first stage filtering might result in millions of examples, so we further filter data using different selection algorithms to find an SSL data pool, which facilitates effective SSL training. While the first stage filtering tries to find domain specific examples from a large pool, the goal of the second stage filtering is to find a subset of data which could result in better performance in SSL training.\n"
          },
          {
            "section": "Data Selection Approaches",
            "text": "For first stage filtering, we train a binary classifier on the labelled data, and use it to select the in-domain unlabelled data. In our experiments, switching between different binary classifiers (linear, CNN, LSTM, etc) does not significantly change the selected data. Consequently, in this study, we simply use a single-layer 256-dimension Bi-LSTM for the first stage of filtering. Based on our initial experiments, we use confidence score 0.5 as the threshold for data selectionWe tried confidence larger than 0.5 but found that a high confidence score degrades the performance. Our hypothesis is that a high confidence score leads to selecting data similar to labeled data hence a less diverse SSL pool.. For second stage filtering, we explore data selection using a committee of models and using submodular optimization. While this paper explores only two data selection methods, it's worth mentioning that any data selection algorithm can be used in the second stage filtering to further optimize the size of SSL pool.\n"
          },
          {
            "section": "Data Selection Approaches",
            "text": "Selection by Submodular Optimization: Submodular data selection is used to select a diverse representative subset of samples from given dataset. This method has been applied in speech recognition , machine translation  and natural language understanding tasks . For SSL data selection, we use feature-based submodular selection , where submodular functions are given by weighted sums of non-decreasing concave functions applied to modular functions. For SSL data selection, we use 1-4 n-gram as features and logarithm as the concave function. We filter out any n-gram features which appear less than 30 times in $\\mathcal {D}_l \\cup \\mathcal {D}_u$ . The lazy greedy algorithm is used to optimize submodular functions. The algorithm starts with $\\mathcal {D}_l$  as the selected data and chooses the utterance from the candidate pool $\\mathcal {D}_u$  which provides maximum marginal gain.\n"
          },
          {
            "section": "Data Selection Approaches",
            "text": "Selection by Committee: SSL techniques work well when the model is able to provide an accurate prediction on unlabeled data. However, when this is not the case, SSL can have a detrimental effect to the overall system, since the model could be creating SSL data that is annotated incorrectly. Ideally, we would like to have a way of detecting when this might be the case.\nTypically, for a given input $x$ , neural networks provide a point estimate that is interpreted as a probability distribution over labels. If the point $x$  is easy to learn, neural networks trained from different initial conditions will learn a similar probability distribution for $x$ . On the other hand, if $x$  is difficult to learn, their predictions are likely to disagree or converge to low confidence predictions. This phenomenon has been observed in several works addressing uncertainty estimation , . As a consequence, data points with high uncertainty are more likely to be incorrectly predicted than those with low uncertainty.\n"
          },
          {
            "section": "Data Selection Approaches",
            "text": "To detect data points on which the model is not reliable, we train a committee of $n$  teacher models (we use $n = 4$  in this paper), and compute the average entropy of the probability distribution for every data point. Specifically, let $P(y;x,\\theta _i)$  denote the probability of label $y$  for input $x$  according to the $i^{th}$  teacher, we compute the average entropy of the predicted label distribution of $x$  as: $H(x) = - \\frac{1}{n} \\sum _{y \\in \\mathcal {Y}} \\sum _{i=1}^n P(y;x,\\theta _i) \\log P(y;x,\\theta _i)$ .\nWe then identify an entropy threshold with an acceptable error rate for mis-annotations (e.g., $20\\%$ ) based on a held-out dataset. Any committee annotated data whose entropy level is higher than the identified threshold, is deemed “not trustworthy” and filtered out.\n"
          },
          {
            "section": "Semi-Supervised Learning Approaches",
            "text": "We explore the following four Semi-Supervised Learning techniques:\n"
          },
          {
            "section": "Semi-Supervised Learning Approaches",
            "text": "PL based self-training is a simple and straightforward method of SSL , . Using a labeled data set $\\mathcal {D}_l$ , we first train a “teacher” model, $f_\\theta$ . We then generate a dataset of pseudo-labeled data from $\\mathcal {D}_u$ , by assigning for each input instance $x_u$ , the label $\\hat{y}$ , predicted by the teacher. A new model, to which we refer as a “student”, is then trained on the union of both pseudo-labeled and labeled datasets.\n"
          },
          {
            "section": "Semi-Supervised Learning Approaches",
            "text": "In KD, for a given input, a teacher model produces a probability distribution over all possible labels. The predicted probability distribution is often referred to as “soft label”. The student model is then trained alternating between two objectives: minimizing the loss on the labeled data, defined respectively for different tasks, and minimizing the cross-entropy loss between the student and teacher predicted “soft label” on the unlabeled data  . The soft labels on intents are generated by the IC's softmax layer, while the soft labels on label sequences are generated per token, by running softmax on the logits for each token before the CRF layer.\n"
          },
          {
            "section": "Semi-Supervised Learning Approaches",
            "text": "VAT is an efficient SSL approach based on adversarial learning. It has been shown to be highly effective in both image  and text classification  tasks. Given an unlabeled instance, VAT generates a small perturbation that would lead to the largest shift on the label distribution predicted by the model. After getting the adversarial perturbation, the objective is to minimize the KL divergence between the label distribution on the original instance and the instance with perturbation.\n"
          },
          {
            "section": "Semi-Supervised Learning Approaches",
            "text": "CVT is another SSL approach proved to be efficient on text classification, sequence labeling and machine translation . Using an Bi-LSTM, CVT uses the the bi-directional output from current state as an auxiliary prediction, takes the single-directional output from current and neighboring LSTM neurons, and forces them to predict the same label as the auxiliary prediction.\n"
          },
          {
            "section": "Data Sets",
            "text": "The main motivation of our study is to evaluate different data selection and SSL techniques in a production scale setting where we have a large amount of unlabeled data. To understand impact of data selection, we create two benchmark datasets for our experiments. In both experiments, using the pipeline shown in Figure REF , we first select $M$  utterances from a very large pool of unlabeled data, and then apply intelligent data selection to further select $N$  unlabeled utterances.\n"
          },
          {
            "section": "Data Sets",
            "text": "Commercial Dataset: Our commercial dataset provides an experimental setup to compare SSL techniques where labeled training data and unlabeled data come from a similar distribution. We choose four representative domains (i.e., categories for which the user can make requests) from a commercially available voice-assistant system for English language. The four selected categories are 1) Communication: queries related to call, messages, 2) Music: queries related to playing music, 3) Notifications: queries related to alarms, timers, and 4) ToDos: queries related to task organization. For each domain, NLU task is to identify the intent (IC), and the entities (NER) in the utterance.\n"
          },
          {
            "section": "Data Sets",
            "text": "For each domain, our dataset contains 50k unique training, 50k unique testing utterances, and hundreds of millions of utterances of unlabeled data. Since, we do not know in advance to which domain each unlabeled utterance belongs, we first select 500K unlabeled utterance per domain to form their respective unlabeled data pool, using a domain classifier, as shown in Figure REF . The choice of 500K size is based on a series of KD based SSL experiments in Music domain, with the SSL data pool size varying from 50K to 1M. It is observed that increasing SSL pool size beyond 500k starts to reduce the performance gain from SSL (Table REF ). To evaluate the effect of intelligent data selection, out of 500k, we further select 300k utterances via different data selection approaches and use them as unlabeled data in SSL experiments.\n<table> Relative error rate reduction using KD, over baseline trained with only labeled data, for Music domain. Unlabeled data SSL pool size varies from 50K to 1M utterances. 50K labeled examples are used for all experiments. The metric for IC is classification error rate, and for NER is entity recognition F1 error rate."
          },
          {
            "section": "Data Sets",
            "text": "SNIPS Dataset: We also create a benchmark setup where labeled and unlabeled data come from different distributions. We use SNIPS  dataset as labeled data, and use unlabeled data from our commercial dataset as SSL pool data. Similar to our commercial dataset, we train a binary classifier for each intent on SNIPS and use it to select $300,000$  utterances as the unlabeled data pool for each intent. Then, we apply data selection approaches to filter for $20,000$  utterances per intent for SSL experiments.\n"
          },
          {
            "section": "Results",
            "text": "This section presents evaluations of different SSL techniques using different data selection regimes.\nFor all experiments, hyperparameters are optimized on development set. The SSL techniques evaluated are: PL, KD, VAT, CVT. The data selection methods evaluated are: random selection (Random), submodular optimization based selection (Submodular), and committee-based selection (Committee).\n"
          },
          {
            "section": "Results on Commercial Dataset",
            "text": "Due to confidentiality, we could not disclose absolute performance numbers on the commercial dataset. Only relative changes over baseline are reported. A summary of the results for the various data selection and SSL techniques is given in Table REF . “Baseline” refers to model trained with only labeled data. The metric for IC task is intent classification error rate. The metric for NER task is entity recognition F1 error rate. The table shows the relative error reduction compared to baseline. The bold font shows the best performing SSL method for each data selection approach.\n<table> Model performance by different SSL methods and data selection methods, for SNIPS data set. The metric for IC task is classification error rate, and for NER task is entity recognition F1 error rate."
          },
          {
            "section": "Results on Commercial Dataset",
            "text": "Comparison of Data Selection Methods: We observe that both Submodular and Committee based selection outperforms random selection across all domains and SSL techniques. This shows the effectiveness of Stage 2 data filtering. While on Notifications and ToDos domain, submodular selection performs better than other methods, on Communication and Music domain, committee based selection performs the best.\n"
          },
          {
            "section": "Results on Commercial Dataset",
            "text": "Comparison of SSL Techniques: Table REF  shows that KD improves performances over PL in virtually all scenarios (except for NER in ToDos). This supports the hypothesis that using the full distribution predicted by the teacher model, instead of using solely the predicted label, allows for the transfer of extra information when training a student model. In addition, though both VAT and CVT consistently outperform KD and PL, their benefits are task dependent. VAT shows stronger benefits on all NER experiments, while CVT performs better in most IC experiments. From an accuracy perspective, VAT is more beneficial in NER tasks while CVT is more beneficial in classification tasks.\n"
          },
          {
            "section": "Results on Commercial Dataset",
            "text": "SSL Techniques Computation Comparison: We time each SSL technique on the data selected for Music domain. While PL and KD took approximately 30 minutes to train each epoch on a Tesla V100 GPU, VAT and CVT took 62 minutes and 75 minutes, respectively. Given that PL and KD have similar compute requirement and KD consistently outperforms PL, KD should be preferred over PL for SSL. The decision between CVT and VAT relies on the trade-off between accuracy and cost.\n"
          },
          {
            "section": "Results on SNIPS Dataset",
            "text": "Test results on SNIPS dataset are summarized in Table REF . The test results on SNIPS aligns with our observations on commercial dataset: VAT and CVT are the superior SSL techniques. Moreover, the results show that VAT and CVT provide good generalization even when the labeled and unlabeled data are from different sources and of different distributions. In contrast to the commercial dataset where intelligent data selection leads to better performance, on SNIPS dataset, we found that submodular optimization or committee based selection do not provide any gain over random selection. It's not surprising given that SNIPS labeled data distribution is very different from the unlabeled SSL data which makes data selection algorithm susceptible to noisy unlabeled data selection. For example, submodular optimization primarily optimizes for data diversity which makes it more likely to select diverse unrelated examples than random selection.\n"
          },
          {
            "section": "Recommendations",
            "text": "Based on our empirical results, we make the following recommendations for industry scale NLU SSL systems.\n"
          },
          {
            "section": "Recommendations",
            "text": "Prefer VAT and CVT SSL techniques over PL and KL: When selecting SSL techniques, CVT usually performs better for classification task while VAT is preferable for NER task. In general, we would recommend VAT since its performance in classification task is comparable to CVT and also because VAT excels in NER task which is usually harder to achieve performance gain.\n"
          },
          {
            "section": "Recommendations",
            "text": "Use data selection to select a subset of unlabeled data: For industry setting where the volume of unlabeled data is impractically large, we introduce a data filtering pipeline to first reduce the size of unlabeled data pool to a manageable size. Our experiments show that both submodular as well as committee based data selection could further improve SSL performance. We recommend Submodular Optimization based data selection in light of its lower cost and similar performance to committee based method.\n"
          },
          {
            "section": "Recommendations",
            "text": "From experiments on SNIPS data sets, we observe that further data selection does not bring extra improvement comparing to random selection. Optimizing data selection, when unlabeled data pool is of a drastically different distribution from the labeled data, remains a challenge and could benefit from further research.\n"
          },
          {
            "section": "Conclusion",
            "text": "In this paper, we conduct extensive experiments and in-depth analysis of different SSL techniques applied to industry scale NLU tasks. Industrial settings come with some unique challenges such as massive unlabeled data with a mixture of in domain and out of domain data. In order to overcome these challenges, we also investigate different data selection approaches including submodular optimization and committee based filtering.\n"
          },
          {
            "section": "Conclusion",
            "text": "Our paper provides insights on how to build an efficient and accurate NLU system, utilizing SSL, from different perspectives (e.g. model accuracy, amount of data, training time and cost, etc). By sharing these insights with larger NLP community, we hope that these guideline will be useful for researchers and practitioner who aim to improve NLU systems while minimizing human annotation effort.\n"
          }
        ]
      }
    }
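
For reference, below is a minimal sketch of how one might iterate over records of this shape using only the Python standard library. The path sample.jsonl and the assumption that the split is stored as JSON Lines (one record per line) are illustrative only; the field names (paper, metadata, bib_entries, contained_arXiv_ids, ids, inlined_texts) are taken from the sample record above.

import json

def summarize_record(record):
    # Walk one record following the schema shown in the sample above.
    paper = record["paper"]
    meta = paper["metadata"]
    print("paper_id:   ", paper["paper_id"])
    print("title:      ", " ".join(meta["title"].split()))   # collapse embedded newlines
    print("categories: ", meta["categories"])
    print("sections:   ", [t["section"] for t in paper["inlined_texts"]])

    # Bibliography entries are keyed by a hash; each value carries the raw
    # reference string, any arXiv IDs or links detected inside it, and
    # (optionally) resolved identifiers such as an OpenAlex ID or DOI.
    for key, entry in paper["bib_entries"].items():
        ids = entry.get("ids") or {}          # "ids" may be null in the data
        doi = ids.get("doi")
        arxiv_ids = [span["id"] for span in entry.get("contained_arXiv_ids", [])]
        if doi or arxiv_ids:
            print(f"  bib {key[:8]}  doi={doi}  arXiv={arxiv_ids}")

if __name__ == "__main__":
    # Hypothetical path; adjust to wherever the data files actually live.
    with open("sample.jsonl", encoding="utf-8") as f:
        for line in f:
            summarize_record(json.loads(line))

The same traversal applies to the abstract and inlined_texts fields; note that their text values preserve the newlines and placeholder tokens (e.g., "Figure REF", "Table REF") from the original extraction, as visible in the sample.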