{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:29:32.131741Z" }, "title": "It is better to Verify: Semi-Supervised Learning with a human in the loop for large-scale NLU models", "authors": [ { "first": "Verena", "middle": [], "last": "Weber", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon Alexa AI", "location": { "settlement": "Berlin", "country": "Germany" } }, "email": "" }, { "first": "Enrico", "middle": [], "last": "Piovano", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon Alexa AI", "location": { "settlement": "Berlin", "country": "Germany" } }, "email": "piovano@amazon.com" }, { "first": "Melanie", "middle": [], "last": "Bradford", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon Alexa AI", "location": { "settlement": "Berlin", "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "When a NLU model is updated, new utterances must be annotated to be included for training. However, manual annotation is very costly. We evaluate a semi-supervised learning workflow with a human in the loop in a production environment. The previous NLU model predicts the annotation of the new utterances, a human then reviews the predicted annotation. Only when the NLU prediction is assessed as incorrect the utterance is sent for human annotation. Experimental results show that the proposed workflow boosts the performance of the NLU model while significantly reducing the annotation volume. Specifically, in our setup, we see improvements of up to 14.16% for a recall-based metric and up to 9.57% for a F1score based metric, while reducing the annotation volume by 97% and overall cost by 60% for each iteration.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "When a NLU model is updated, new utterances must be annotated to be included for training. However, manual annotation is very costly. We evaluate a semi-supervised learning workflow with a human in the loop in a production environment. The previous NLU model predicts the annotation of the new utterances, a human then reviews the predicted annotation. Only when the NLU prediction is assessed as incorrect the utterance is sent for human annotation. Experimental results show that the proposed workflow boosts the performance of the NLU model while significantly reducing the annotation volume. Specifically, in our setup, we see improvements of up to 14.16% for a recall-based metric and up to 9.57% for a F1score based metric, while reducing the annotation volume by 97% and overall cost by 60% for each iteration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Natural Language Understanding (NLU) models are a key component of task-oriented dialog systems such as as Amazon Alexa or Google Assistant which have gained more popularity in recent years. To improve their performance and extend their functionalities, new versions of the NLU model are released to customers on a regular basis. In the classical supervised learning approach, new training data between model updates is acquired by sampling utterances from live traffic and have them annotated by humans. The main drawback is the high cost of manual annotation. We refer to this conventional workflow as human annotation workflow. 
In this paper, we propose a new workflow with the aim of reducing the annotation cost while still maintaining high-quality NLU models. We refer to it as the human verification workflow. The proposed workflow uses the previous (current) version of the NLU model to annotate the new training data before each model update. The predicted annotation produced by the NLU model, which we refer to as the NLU hypothesis or interpretation, is then reviewed by humans. If the NLU hypothesis is assessed as correct, the NLU hypothesis is used as the ground-truth annotation of the utterance during training. If the NLU hypothesis is assessed as incorrect, the utterance is sent for human annotation before being ingested for training. With the proposed workflow, only utterances for which the hypothesis of the NLU model was assessed as incorrect are annotated by humans, thereby reducing the annotation volume drastically. Since verifying is faster and cheaper than annotating, a cost reduction is achieved. We investigate the adoption of this workflow once the system has reached a certain maturity, not from the start. While these two workflows would provide the same annotation for any utterance in an ideal world, the results may differ in the real world depending on the presence of annotation or verification errors. In this paper, we would like to answer the following fundamental question: in terms of human annotation errors, human verification errors and model performance, is it better to manually verify or annotate in order to iteratively update NLU systems?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To answer this question, we investigate the impact of human annotation vs. verification in a large-scale NLU system. To this end, we consider two model architectures utilized for NLU models in current production systems, a Conditional Random Field (CRF) (Lafferty et al., 2001; Okazaki, 2007) for slot filling and a Maximum Entropy (MaxEnt) classifier (Berger et al., 1996) for intent classification, as well as a transformer-based BERT architecture (Devlin et al., 2018). We evaluate the proposed workflow both explicitly, by measuring annotation quality, and implicitly, by comparing the resulting model performance. Our experimental results show that the human verification workflow boosts the model performance while reducing human annotation volumes. In addition, we show that human annotation resources are better spent on utterances selected through Active Learning (Cohn et al., 1996; Settles, 2009; Konyushkova et al., 2017).", "cite_spans": [ { "start": 258, "end": 281, "text": "(Lafferty et al., 2001;", "ref_id": "BIBREF5" }, { "start": 282, "end": 296, "text": "Okazaki, 2007)", "ref_id": "BIBREF9" }, { "start": 356, "end": 377, "text": "(Berger et al., 1996)", "ref_id": "BIBREF0" }, { "start": 453, "end": 474, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF2" }, { "start": 872, "end": 900, "text": "Learning (Cohn et al., 1996;", "ref_id": null }, { "start": 901, "end": 915, "text": "Settles, 2009;", "ref_id": "BIBREF12" }, { "start": 916, "end": 941, "text": "Konyushkova et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Using a model to label data instead of humans is an approach that has been studied extensively, since human labelling is costly while unlabelled data can be acquired easily. 
Under the term Semi-supervised learning (SSL) (Zhou and Belkin, 2014; Zhu, 2005), many different approaches to leveraging unlabelled data have emerged in the literature. SSL aims at exploiting unlabelled data based on a small set of labelled data. One approach is self-training, also referred to as self-teaching or bootstrapping (Zhu, 2005; Triguero et al., 2015). In self-training, labels are generated by feeding the unlabelled data into a model trained on the available labelled data. Typically, the predicted labels for instances with high confidence are then used to retrain the model and the procedure is repeated. For neural networks, Lee (2013) suggested pseudo-labelling, which optimizes a combination of a supervised and an unsupervised loss instead of retraining the model on pseudo-labels. Self-training has been applied to several natural language processing tasks. To name only a few examples, Yarowsky (1995) uses self-training for word sense disambiguation, and Riloff et al. (2003) use it to identify subjective nouns. In McClosky et al. (2006), self-training is used for parsing. The two main drawbacks of self-training are that instances with low confidence scores cannot be labelled and that prediction errors with high confidence can reinforce themselves. To mitigate the latter issue, strategies to identify mislabeled instances have been discussed. An exhaustive review is beyond the scope of this paper; we just name a few examples. Li and Zhou (2005) use local information in a neighborhood graph to identify unreliable labels, while Shi et al. (2018) add a distance-based uncertainty weight for each sample and propose Min-Max features for better between-class separability and within-class compactness. In this paper, we suggest using human verification to ensure that the ingested predicted labels are reliable. In addition, we rely on human annotation for those utterances that the model cannot interpret correctly. The goal is to mitigate the two afore-mentioned problems of self-training.", "cite_spans": [ { "start": 218, "end": 241, "text": "(Zhou and Belkin, 2014;", "ref_id": "BIBREF20" }, { "start": 242, "end": 252, "text": "Zhu, 2005)", "ref_id": "BIBREF21" }, { "start": 494, "end": 505, "text": "(Zhu, 2005;", "ref_id": "BIBREF21" }, { "start": 506, "end": 528, "text": "Triguero et al., 2015)", "ref_id": "BIBREF15" }, { "start": 1069, "end": 1084, "text": "Yarowsky (1995)", "ref_id": "BIBREF17" }, { "start": 1134, "end": 1154, "text": "Riloff et al. (2003)", "ref_id": "BIBREF10" }, { "start": 1188, "end": 1210, "text": "McClosky et al. (2006)", "ref_id": "BIBREF8" }, { "start": 1601, "end": 1619, "text": "Li and Zhou (2005)", "ref_id": "BIBREF7" }, { "start": 1697, "end": 1714, "text": "Shi et al. (2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A so-called human-in-the-loop approach has been investigated for different applications. Zhang et al. (2020) investigate a human-in-the-loop approach for image segmentation and annotation. Schulz et al. (2019) examine the use of suggestion models to support human experts with segmentation and classification of epistemic activities in diagnostic reasoning texts. Zhang and Chaudhuri (2015) suggest active learning from weak and strong labelers where these labelers can be humans with different levels of expertise in the labelling task. 
Shivaswamy and Joachims (2015) show that a human expert is not always needed but that user behavior is valuable feedback that can be collected more easily.", "cite_spans": [ { "start": 89, "end": 108, "text": "Zhang et al. (2020)", "ref_id": "BIBREF19" }, { "start": 189, "end": 209, "text": "Schulz et al. (2019)", "ref_id": "BIBREF11" }, { "start": 365, "end": 391, "text": "Zhang and Chaudhuri (2015)", "ref_id": "BIBREF18" }, { "start": 539, "end": 569, "text": "Shivaswamy and Joachims (2015)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The contribution of this paper is two-fold: First, we propose an SSL approach with a human in the loop for large-scale NLU models. Second, we show that this workflow boosts performance in a production system while reducing human annotation significantly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Active Learning (AL) (Cohn et al., 1996; Settles, 2009; Konyushkova et al., 2017) proposes to label those instances that promise the highest learning effect for the model instead of blindly labelling data. Since the proposed workflow reduces the human annotation volume, we spend some of these freed-up resources on the annotation of AL data.", "cite_spans": [ { "start": 21, "end": 40, "text": "(Cohn et al., 1996;", "ref_id": "BIBREF1" }, { "start": 41, "end": 55, "text": "Settles, 2009;", "ref_id": "BIBREF12" }, { "start": 56, "end": 81, "text": "Konyushkova et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this section, we briefly discuss the NLU model, the metrics used, the concept of iterative model updates, and the evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup and Approach", "sec_num": "3" }, { "text": "A common approach to NLU is dividing the recognition task into two subtasks. Predicting the intent and the slots of a user's utterance constitutes a way to map the utterance onto a semantic space. Accordingly, our NLU model consists of two models, each performing one of these subtasks. Intent classification (IC) predicts the user's specific intent, e.g. play music or turn on a light. Slot filling (SF) then extracts the semantic constituents from the utterance. Taking the example \"Where is MCO?\" from the ATIS data (Tur et al., 2010) (Do and Gaspers, 2019), the utterance should be labelled as", "cite_spans": [ { "start": 521, "end": 539, "text": "(Tur et al., 2010)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "NLU task", "sec_num": "3.1" }, { "text": "where\u2212[O] is\u2212[O] MCO\u2212[B\u2212airport_code]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NLU task", "sec_num": "3.1" }, { "text": "by slot filling. The intent should be recognized as city. When an utterance is manually annotated for training, the annotator performs the same operation as the NLU model, mapping the utterance to a specific intent and slots so that it can be ingested for training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NLU task", "sec_num": "3.1" }, { "text": "We report results considering two metrics utilized to evaluate the performance of NLU models in production systems: the Semantic Error Rate (SemER) and the Intent Classification Error Rate (ICER). SemER takes into consideration both intent and slot classification errors, while ICER only takes intent errors into consideration. 
SemER is computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "3.2" }, { "text": "SemER = #(slot + intent errors) / #(slots + intents in reference) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "3.2" }, { "text": "ICER is simply the percentage of utterances with a misclassified intent; only intent classification errors count, while slot errors are ignored.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "ICER = #(intent errors) / #(total utterances)", "eq_num": "(2)" } ], "section": "Metrics", "sec_num": "3.2" }, { "text": "Note that both SemER and ICER are error metrics, i.e. a reduction in the metric reflects an improvement. Both are one-sided metrics that do not take precision into account. Therefore, we also report F-metrics for SemER and ICER, which are referred to as F-SemER and F-ICER, respectively. They are defined as the harmonic mean of the recall-based metric and the precision. We report macro-averages over intents for all metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "3.2" }, { "text": "NLU models need to be regularly updated to improve their capability to understand new customer requests and to extend the functionalities of the virtual assistant. Therefore, new models trained on recent customer data are released on a regular basis. New data is sampled from live traffic between two NLU model releases and annotated. A part of the legacy training data is then discarded and replaced by the new annotated data for two reasons: 1) practical constraints on the build time of the new release model, and 2) using data that is too old and therefore unrepresentative could degrade model performance. As a consequence, each NLU model is trained on an almost constant number of training utterances. For example, assuming that the overall training size is constrained to 400,000 utterances, then, if in a new release 10,000 new utterances are added, the oldest 10,000 will be removed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Iterative Model Updates", "sec_num": "3.3" }, { "text": "When NLU models are released for the first time, only human-annotated data are used for training, as previous versions of the NLU model are not available. This means that, in theory, the two workflows can be implemented from the second release onward. This implies that during the first few releases the majority of the training data is human-annotated data. However, due to the data elimination procedure described in Section 3.3, after a certain number of releases with the verification workflow, the manually annotated data from the first release will be fully removed from the training set. Here we assume we are in that maturity stage, where the full training dataset is derived from either the verification or the annotation workflow, and hence no mixed training set between the two workflows is considered. For the evaluation of the proposed workflows, we simulate the described updates and consider a specific model update for evaluation. A schematic timeline is shown in Figure 1. As we are considering a mature NLU model, this evaluation is representative of other model updates. 
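To make the SemER and ICER definitions from Section 3.2 concrete, the following minimal sketch (our own illustration, not the production scorer) computes the two corpus-level, recall-based metrics from token-aligned reference and hypothesis annotations; the data structures are assumptions, and the macro-averaging over intents reported in the paper is omitted for brevity:
```python
from dataclasses import dataclass
from typing import List

@dataclass
class Annotation:
    intent: str
    slots: List[str]  # one slot label per token, e.g. ['O', 'O', 'B-airport_code']

def semer_icer(references: List[Annotation], hypotheses: List[Annotation]):
    # Corpus-level SemER and ICER following Equations (1) and (2); assumes the
    # reference and hypothesis slot sequences are token-aligned.
    sem_errors, sem_reference_units, intent_errors = 0, 0, 0
    for ref, hyp in zip(references, hypotheses):
        intent_wrong = int(ref.intent != hyp.intent)         # one intent decision per utterance
        slot_errors = sum(r != h for r, h in zip(ref.slots, hyp.slots))
        intent_errors += intent_wrong
        sem_errors += slot_errors + intent_wrong             # numerator of Eq. (1)
        sem_reference_units += len(ref.slots) + 1            # slots + intent in the reference
    semer = sem_errors / sem_reference_units
    icer = intent_errors / len(references)
    return semer, icer
```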
", "cite_spans": [], "ref_spans": [ { "start": 963, "end": 971, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Maturity and workflow evaluation", "sec_num": "3.4" }, { "text": "This section describes the two workflows in detail. Throughout this paper we denote the human annotation workflow as the benchmark.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detailed Workflow Description", "sec_num": "4.1" }, { "text": "In each model update, the new training utterances are sent for manual annotation. Hence, the whole training dataset on which the NLU model is retrained (or-fine-tuned) periodically is human annotated, including the recently added utterances. The annotator only has access to the annotation guideline, but cannot see any kind of hypothesized annotation of the utterance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human annotation workflowbenchmark:", "sec_num": "1." }, { "text": "Before each model update, the new training instances are first fed into the previous NLU model. The NLU hypothesis is then sent for human verification to assess if the NLU hypothesis is correct or not. If the annotation is evaluated as correct, the NLU hypothesis is ingested as ground-truth in the new NLU model training dataset. If the annotation is evaluated as incorrect, the utterance is sent for human annotation before being ingested. In this workflow, the evaluator has access to both the annotation guideline as well as the NLU annotation hypothesis of the utterance. Figure 2 depicts the proposed workflow. The training dataset on which the NLU model is retrained (or fine-tuned) only partially consists of human-annotated data.", "cite_spans": [], "ref_spans": [ { "start": 577, "end": 585, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Human verification workflowproposed:", "sec_num": "2." }, { "text": "With the proposed workflow, the cost is dramatically reduced as verifying is faster and cheaper than annotating. However, the question is if the verification workflow is also favorable in terms of data quality and model performance. In our experiments we therefore evaluate which of the two workflows is able to generate higher quality training data and enhance the NLU model performance. Results are discussed in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human verification workflowproposed:", "sec_num": "2." }, { "text": "For training, we start with a dataset of unlabelled utterances representative of the user engagement with a dialog system. The dataset spans over a large number of intent and slots representative of multiple functionalities. High level statistics are listed in Table 5 .", "cite_spans": [], "ref_spans": [ { "start": 261, "end": 268, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Datasets", "sec_num": "5" }, { "text": "In order to have the same annotation and verification quality as in the production system, we requested the support from professional annotators. Trained and experienced annotators mimicked both workflows. For each utterance, one annotator of the team followed the human annotation workflow, while another followed the human verification workflow. For each training utterance, we also have the corresponding NLU hypothesis from the production model when the utterance was sampled. As a result two labelled training datasets were generated from one unlabelled dataset following each workflow. 
The overall training dataset has been built over multiple NLU releases as explained in Section 3.3. The two training sets are then used to re-train or fine-tune each of the considered architectures. As a test set, we also consider a dataset of utterances representative of the engagement of the users with a voice assistant (see Table 5), also sampled as explained in Section 3.3. In order to have a correct and unbiased test set, test data are annotated following a different pipeline than the one used for training. For each test utterance, three annotators need to produce the same annotation (100% agreement). This allows us to assume that the annotation of the test data is almost surely correct. The updated models are then evaluated on the test set to compare performance.", "cite_spans": [], "ref_spans": [ { "start": 920, "end": 927, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Datasets", "sec_num": "5" }, { "text": "This section describes the conducted experiments to evaluate both workflows and provides more details about how we selected utterances for annotation through AL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "To evaluate the proposed verification workflow, we consider two NLU architectures:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Considered Model Architectures", "sec_num": "6.1" }, { "text": "\u2022 CRF+MaxEnt classifier architecture:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Considered Model Architectures", "sec_num": "6.1" }, { "text": "We use a Conditional Random Field (CRF) (Lafferty et al., 2001; Okazaki, 2007) for slot filling and a Maximum Entropy (MaxEnt) classifier (Berger et al., 1996) for intent classification. The new NLU model is obtained by re-training from scratch on the updated training dataset.", "cite_spans": [ { "start": 40, "end": 63, "text": "(Lafferty et al., 2001;", "ref_id": "BIBREF5" }, { "start": 64, "end": 78, "text": "Okazaki, 2007)", "ref_id": "BIBREF9" }, { "start": 138, "end": 159, "text": "(Berger et al., 1996)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Considered Model Architectures", "sec_num": "6.1" }, { "text": "\u2022 BERT architecture:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Considered Model Architectures", "sec_num": "6.1" }, { "text": "We use a transformer-based BERT model (Devlin et al., 2018) that jointly solves the tasks of intent classification and NER. Hidden states are fed into a softmax layer to solve the two tasks. We use pre-trained mono-lingual BERT for German trained on unsupervised data from Wikipedia pages. [Table 5, high-level statistics: training set - 400,000 utterances, 316 distinct intents, 282 distinct slots; test set - 100,000 utterances, 316 distinct intents, 282 distinct slots.] We tokenize the input sentence, feed it to BERT, get the last layer's activations, and pass them through a final layer to make intent and NER predictions. In this case, the updated NLU model is obtained by fine-tuning the initial NLU model on the new training dataset.", "cite_spans": [ { "start": 38, "end": 58, "text": "(Devlin et al., 2018", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Considered Model Architectures", "sec_num": "6.1" }, { "text": "For both approaches, we keep the set of features, hyperparameters and configuration constant for our experiments. All experiments are conducted for German. 
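A minimal sketch of the joint intent classification and slot tagging head on top of a pre-trained BERT encoder described above, assuming the HuggingFace transformers API; the checkpoint name is a placeholder for the in-house German BERT, the label counts follow the dataset statistics above, and the BIO expansion of slot labels is omitted for simplicity:
```python
import torch.nn as nn
from transformers import BertModel

class JointIntentSlotModel(nn.Module):
    # Illustrative joint intent + slot model; the exact head design used in
    # production is not specified in the paper.
    def __init__(self, model_name='bert-base-german-cased', num_intents=316, num_slots=282):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.intent_head = nn.Linear(hidden, num_intents)  # one prediction per utterance
        self.slot_head = nn.Linear(hidden, num_slots)      # one prediction per token

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.last_hidden_state[:, 0])  # [CLS] token state
        slot_logits = self.slot_head(out.last_hidden_state)            # all token states
        return intent_logits, slot_logits
```
Both heads would typically be trained jointly, e.g. with a sum of cross-entropy losses over intent and slot labels.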
For each architecture, the models are trained using the annotated data from the annotation and verification workflows, respectively. For the BERT models, this step is preceded by pretraining both models on unsupervised Wikipedia data. We then compare the performance of the resulting models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Considered Model Architectures", "sec_num": "6.1" }, { "text": "We perform AL in two steps, starting from a corpus of millions of unlabelled utterances:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active Learning", "sec_num": "6.2" }, { "text": "1. For each domain, select through a binary classifier which utterances from the unsupervised corpus are relevant to the domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active Learning", "sec_num": "6.2" }, { "text": "2. Out of the candidate pool, select those with the lowest product of the MaxEnt classifier (IC) and CRF (NER) confidence scores and send them for annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active Learning", "sec_num": "6.2" }, { "text": "Note that a low product of the IC and NER scores indicates that the utterance is difficult for the model to label. We selected a total of 30,000 utterances through AL for human annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active Learning", "sec_num": "6.2" }, { "text": "This section discusses all obtained results. We first evaluate the annotation quality for both workflows and quantify the possible cost reduction for the proposed workflow (see Sections 7.1 and 7.2). Second, we compare the performance of the NLU models when trained on data labeled through the respective workflow. Results are shown in Section 7.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "7" }, { "text": "To investigate by how much human annotation could be reduced through the proposed workflow, we calculate the percentage of utterances for which the NLU hypothesis of the previous model was assessed as correct between each update. We find that 97% of the annotations from the NLU model are assessed as correct. This means that only 3% of the utterances would be manually annotated, constituting a significant reduction in annotation volume.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation reduction with the proposed workflow", "sec_num": "7.1" }, { "text": "Annotating an utterance takes about 2.5 times as long as verifying it. Note that time is proportional to cost, as we assume that human annotation specialists are paid a certain wage per hour and are able to process a certain number of utterances depending on the task (annotation vs. verification). Let N denote the number of sampled utterances, t_A the annotation time per utterance and t_V the verification time per utterance in minutes. 
Then", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation reduction with the proposed workflow", "sec_num": "7.1" }, { "text": "t A = 2.5 \u2022 t V or t V = 0.4 \u2022 t A .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation reduction with the proposed workflow", "sec_num": "7.1" }, { "text": "The total cost for the verification workflow can then be written as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation reduction with the proposed workflow", "sec_num": "7.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "total V = N \u2022 t V + 0.03 \u2022 N \u2022 t A", "eq_num": "(3)" } ], "section": "Annotation reduction with the proposed workflow", "sec_num": "7.1" }, { "text": "Substituting", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation reduction with the proposed workflow", "sec_num": "7.1" }, { "text": "t V = 0.4 \u2022 t A into 3 gives total V = 0.43 \u2022 N \u2022 t A .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation reduction with the proposed workflow", "sec_num": "7.1" }, { "text": "Note that N \u2022 t A denotes the total cost of the annotation workflow total A , so", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation reduction with the proposed workflow", "sec_num": "7.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "total V = 0.43 \u2022 total A .", "eq_num": "(4)" } ], "section": "Annotation reduction with the proposed workflow", "sec_num": "7.1" }, { "text": "Thus the verification workflow leads to an overall cost reduction of almost 60 %.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation reduction with the proposed workflow", "sec_num": "7.1" }, { "text": "To compare the frequency of human errors in annotation and verification workflow, we requested an assessment by specialized annotators for the annotations from each workflow for one sample of utterances. For each utterance, three specialists had to agree in their assessment. Note that we took a sample of utterances assessed as correct in the human verification workflow as we wanted to estimate the percentage of incorrect training data that might be ingested through the verification workflow. Table 3 shows the human errors in the verification workflow relative to the annotation workflow.", "cite_spans": [], "ref_spans": [ { "start": 497, "end": 504, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Quality of evaluation vs annotation", "sec_num": "7.2" }, { "text": "SemER F-ICER F-SemER Annotation Reduction 1 MaxEnt+CRF -3.85% -2.62% -6.81% -3.45% -97% 2 MaxEnt+CRF+AL -24.58% -20.44% -26.04% -17.26% -90% 3 BERT -14.16% -8.77% -9.47% -1.80% -97% An annotation or verification is treated as incorrect if the intent or at least one of the slots is incorrect. We can see the verification workflow reduces overall human errors by 66% compared to the annotation workflow. Note that this large human error reduction is mostly driven by fewer intent errors, which are reduced by 80% for the verification workflow relative to the annotation workflow. Overall, the frequency of verification human errors is significantly lower than the frequency of annotation human errors. 
This means that looking at an already annotated utterance helps to reduce the amount of low-quality training data compared to annotating an utterance from scratch, where the annotator has no indication to start from.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ICER", "sec_num": null }, { "text": "To evaluate the annotation consistency in each training dataset generated through the respective workflow, we calculate the average entropy across each dataset on token level and report it in Table 4. The entropy is lower the fewer interpretations we see for the same token, i.e. the more consistent the annotation is. The entropy of the training set from the verification workflow is 5% lower than for the annotation workflow.", "cite_spans": [], "ref_spans": [ { "start": 178, "end": 185, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "ICER", "sec_num": null }, { "text": "Table 3 (relative human error, verification vs. annotation workflow): Intent Errors -80%, Slot Errors -50%, Overall Errors -66%. Table 4 (average entropy on token level for each training dataset generated through the respective workflow): annotation workflow 0.5677, verification workflow 0.5378, relative -5.3%.", "cite_spans": [], "ref_spans": [ { "start": 126, "end": 133, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Rel. human error Intent Errors", "sec_num": null }, { "text": "Table 2 displays all the experimental results measuring the impact of the verification workflow vs. the annotation workflow on model performance. Specifically, we show the relative percentage change of the metric values for the verification workflow relative to the metric values for the annotation workflow as a baseline. As SemER and ICER are error-based metrics, a \"-\" means an improvement in performance for verification compared to annotation, while \"+\" means a degradation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Results", "sec_num": "7.3" }, { "text": "It is evident that the verification workflow outperforms the annotation workflow, often even by a substantial margin, for all experiments and metrics, while drastically reducing the manual annotation volume for each iteration. This is in line with the previous observation of a lower error rate and higher consistency in the training data from the verification workflow (see Section 7.2). Moreover, the gain in terms of ICER is higher than in terms of SemER for all experiments, which is driven by the greater reduction of intent errors in the verification workflow. We assume the display of the NLU hypothesis influences verifiers and results in a more consistent annotation when it comes to ambiguous utterances that have multiple valid interpretations. This again leads to more consistency in the training data by reducing the number of utterances for which the model sees two different annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Results", "sec_num": "7.3" }, { "text": "The gains for BERT are larger than for MaxEnt+CRF, except for F-SemER. This suggests that BERT is more sensitive to contradictory training data, which is why the proposed workflow yields even higher performance gains compared to the MaxEnt+CRF architecture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Results", "sec_num": "7.3" }, { "text": "Given the high reduction in annotation volume through the proposed workflow, we used some of the freed-up capacity instead to have AL data annotated. 
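For reference, the confidence-product selection described in Section 6.2 can be sketched as follows (our own illustration; ic_confidence and ner_confidence are hypothetical hooks into the MaxEnt and CRF models):
```python
def select_al_batch(candidates, ic_confidence, ner_confidence, budget=30000):
    # Pick the utterances the current model is least confident about (Section 6.2, step 2).
    # ic_confidence(utt)  -> MaxEnt intent-classification confidence in [0, 1] (assumed hook)
    # ner_confidence(utt) -> CRF slot-filling confidence in [0, 1] (assumed hook)
    # A low product of the IC and NER scores indicates the utterance is hard to label.
    scored = [(ic_confidence(utt) * ner_confidence(utt), utt) for utt in candidates]
    scored.sort(key=lambda pair: pair[0])       # lowest joint confidence first
    return [utt for _, utt in scored[:budget]]  # send these for human annotation
```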
We added an additional 30,000 AL utterances for the most confused intents and slots to the training dataset of each workflow. As shown in Table 2, adding comparatively little AL data boosts the model performance of the verification vs. annotation models by more than 20% for almost all metrics, while increasing the annotation volume by less than 10%. The large relative difference in performance for verification vs. annotation suggests that AL is even more beneficial for the verification workflow.", "cite_spans": [], "ref_spans": [ { "start": 290, "end": 297, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiment Results", "sec_num": "7.3" }, { "text": "With the aim of reducing annotation costs, we test a methodology where mature NLU models are iteratively updated by ingesting labelled data via a human verification instead of a human annotation workflow. Our findings show that the proposed verification workflow not only cuts annotation costs by almost 60%, but also boosts the performance of the NLU system for both considered architectures. This is in line with the annotation quality evaluation we performed, where we found that the human error rate for verification is lower than the human error rate for annotation, yielding more consistent training data in the former case. Our findings have an important practical implication: verifying is better than annotating for mature systems. Moreover, a fraction of the annotation savings should be utilized to annotate more impactful data, for instance AL data, which generated a large performance gain in the proposed workflow with a minimal increase in annotation volume.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" } ], "back_matter": [ { "text": "The authors would like to thank the Alexa DeepNLU team for providing the pre-trained BERT model for the German language. The authors would also like to thank Tobias Falke for his valuable comments on an earlier draft of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "Adam", "middle": [ "L" ], "last": "Berger", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" } ], "year": 1996, "venue": "Comput. Linguist", "volume": "22", "issue": "1", "pages": "39--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy ap- proach to natural language processing. Comput. Lin- guist., 22(1):39-71.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Active learning with statistical models", "authors": [ { "first": "Zoubin", "middle": [], "last": "David A Cohn", "suffix": "" }, { "first": "Michael I Jordan", "middle": [], "last": "Ghahramani", "suffix": "" } ], "year": 1996, "venue": "Journal of artificial intelligence research", "volume": "4", "issue": "", "pages": "129--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "David A Cohn, Zoubin Ghahramani, and Michael I Jor- dan. 1996. Active learning with statistical models. 
Journal of artificial intelligence research, 4:129- 145.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Crosslingual transfer learning for spoken language understanding", "authors": [ { "first": "Thi", "middle": [], "last": "Quynh Ngoc", "suffix": "" }, { "first": "Judith", "middle": [], "last": "Do", "suffix": "" }, { "first": "", "middle": [], "last": "Gaspers", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quynh Ngoc Thi Do and Judith Gaspers. 2019. Cross- lingual transfer learning for spoken language under- standing. Proceedings of the 2019 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing, ICASSP.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning active learning from data", "authors": [ { "first": "Ksenia", "middle": [], "last": "Konyushkova", "suffix": "" }, { "first": "Raphael", "middle": [], "last": "Sznitman", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Fua", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "4225--4235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ksenia Konyushkova, Raphael Sznitman, and Pascal Fua. 2017. Learning active learning from data. In Advances in Neural Information Processing Systems, pages 4225-4235.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth Inter- national Conference on Machine Learning, ICML '01, pages 282-289, San Francisco, CA, USA. 
Mor- gan Kaufmann Publishers Inc.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "authors": [ { "first": "Dong-Hyun", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2013, "venue": "Workshop on challenges in representation learning, ICML", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dong-Hyun Lee. 2013. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on challenges in rep- resentation learning, ICML, volume 3.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Setred: Selftraining with editing", "authors": [ { "first": "Ming", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhi-Hua", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2005, "venue": "Pacific-Asia Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "611--621", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ming Li and Zhi-Hua Zhou. 2005. Setred: Self- training with editing. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 611- 621. Springer.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Effective self-training for parsing", "authors": [ { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", "volume": "", "issue": "", "pages": "152--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Pro- ceedings of the Human Language Technology Con- ference of the NAACL, Main Conference, pages 152- 159.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Crfsuite: a fast implementation of conditional random fields (crfs", "authors": [ { "first": "Naoaki", "middle": [], "last": "Okazaki", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Naoaki Okazaki. 2007. Crfsuite: a fast implementation of conditional random fields (crfs). URL http://www. chokkan. org/software/crfsuite.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning subjective nouns using extraction pattern bootstrapping", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Riloff, Janyce Wiebe, and Theresa Wilson. 2003. Learning subjective nouns using extraction pattern bootstrapping. 
In Proceedings of the seventh confer- ence on Natural language learning at HLT-NAACL 2003, pages 25-32.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Analysis of automatic annotation suggestions for hard discourse-level tasks in expert domains", "authors": [ { "first": "Claudia", "middle": [], "last": "Schulz", "suffix": "" }, { "first": "M", "middle": [], "last": "Christian", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Meyer", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Kiesewetter", "suffix": "" }, { "first": "Elisabeth", "middle": [], "last": "Sailer", "suffix": "" }, { "first": "", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "R", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Fischer", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Fischer", "suffix": "" }, { "first": "", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.02564" ] }, "num": null, "urls": [], "raw_text": "Claudia Schulz, Christian M Meyer, Jan Kiesewetter, Michael Sailer, Elisabeth Bauer, Martin R Fischer, Frank Fischer, and Iryna Gurevych. 2019. Anal- ysis of automatic annotation suggestions for hard discourse-level tasks in expert domains. arXiv preprint arXiv:1906.02564.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Active learning literature survey", "authors": [ { "first": "Burr", "middle": [], "last": "Settles", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burr Settles. 2009. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Transductive semi-supervised deep learning using min-max features", "authors": [ { "first": "Weiwei", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Yihong", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Ding", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the European Conference on Computer Vision (ECCV)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiwei Shi, Yihong Gong, Chris Ding, Zhiheng MaX- iaoyu Tao, and Nanning Zheng. 2018. Transductive semi-supervised deep learning using min-max fea- tures. In Proceedings of the European Conference on Computer Vision (ECCV).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Coactive learning", "authors": [ { "first": "Pannaga", "middle": [], "last": "Shivaswamy", "suffix": "" }, { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2015, "venue": "Journal of Artificial Intelligence Research", "volume": "53", "issue": "", "pages": "1--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pannaga Shivaswamy and Thorsten Joachims. 2015. Coactive learning. 
Journal of Artificial Intelligence Research, 53:1-40.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Self-labeled techniques for semi-supervised learning: taxonomy, software and empirical study", "authors": [ { "first": "Isaac", "middle": [], "last": "Triguero", "suffix": "" }, { "first": "Salvador", "middle": [], "last": "Garc\u00eda", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Herrera", "suffix": "" } ], "year": 2015, "venue": "Knowledge and Information systems", "volume": "42", "issue": "2", "pages": "245--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isaac Triguero, Salvador Garc\u00eda, and Francisco Herrera. 2015. Self-labeled techniques for semi-supervised learning: taxonomy, software and empirical study. Knowledge and Information systems, 42(2):245- 284.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "What is left to be understood in atis?", "authors": [ { "first": "Gokhan", "middle": [], "last": "Tur", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-T\u00fcr", "suffix": "" }, { "first": "Larry", "middle": [], "last": "Heck", "suffix": "" } ], "year": 2010, "venue": "2010 IEEE Spoken Language Technology Workshop", "volume": "", "issue": "", "pages": "19--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gokhan Tur, Dilek Hakkani-T\u00fcr, and Larry Heck. 2010. What is left to be understood in atis? In 2010 IEEE Spoken Language Technology Workshop, pages 19- 24. IEEE.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Unsupervised word sense disambiguation rivaling supervised methods", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1995, "venue": "33rd annual meeting of the association for computational linguistics", "volume": "", "issue": "", "pages": "189--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky. 1995. Unsupervised word sense dis- ambiguation rivaling supervised methods. In 33rd annual meeting of the association for computational linguistics, pages 189-196.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Active learning from weak and strong labelers", "authors": [ { "first": "Chicheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kamalika", "middle": [], "last": "Chaudhuri", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1510.02847" ] }, "num": null, "urls": [], "raw_text": "Chicheng Zhang and Kamalika Chaudhuri. 2015. Ac- tive learning from weak and strong labelers. arXiv preprint arXiv:1510.02847.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Human-in-the-loop image segmentation and annotation", "authors": [ { "first": "Xiaoya", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Lianjie", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jin", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2020, "venue": "Science China Information Sciences", "volume": "63", "issue": "11", "pages": "1--3", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoya Zhang, Lianjie Wang, Jin Xie, and Pengfei Zhu. 2020. Human-in-the-loop image segmentation and annotation. 
Science China Information Sciences, 63(11):1-3.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Chapter 22 -semi-supervised learning", "authors": [ { "first": "Xueyuan", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Mikhail", "middle": [], "last": "Belkin", "suffix": "" } ], "year": 2014, "venue": "Academic Press Library in Signal Processing", "volume": "1", "issue": "", "pages": "1239--1269", "other_ids": { "DOI": [ "10.1016/B978-0-12-396502-8.00022-X" ] }, "num": null, "urls": [], "raw_text": "Xueyuan Zhou and Mikhail Belkin. 2014. Chapter 22 - semi-supervised learning. In Paulo S.R. Diniz, Jo- han A.K. Suykens, Rama Chellappa, and Sergios Theodoridis, editors, Academic Press Library in Sig- nal Processing: Volume 1, volume 1 of Academic Press Library in Signal Processing, pages 1239 - 1269. Elsevier.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Semi-supervised learning literature survey", "authors": [ { "first": "Jerry", "middle": [], "last": "Xiaojin", "suffix": "" }, { "first": "", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojin Jerry Zhu. 2005. Semi-supervised learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sci- ences.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Schematic depiction of the NLU model updates timeline. Each dash represents a release. Results are reported for Evaluation point.", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Schematic depiction of the proposed verification workflow. Note that the NLU model is updated periodically.", "type_str": "figure", "uris": null }, "TABREF0": { "html": null, "type_str": "table", "num": null, "content": "", "text": "High level statistics for training and test set." }, "TABREF1": { "html": null, "type_str": "table", "num": null, "content": "
", "text": "Rel. difference in error metrics for verification vs annotation (baseline) workflow for all experiments." }, "TABREF2": { "html": null, "type_str": "table", "num": null, "content": "
", "text": "Human error frequencies for verfication vs annotation on a sample of utterances." } } } }