{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:14:05.161728Z"
},
"title": "Sequence Mixup for Zero-Shot Cross-Lingual Part-Of-Speech Tagging",
"authors": [
{
"first": "Megh",
"middle": [],
"last": "Thakkar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BITS",
"location": {
"country": "Pilani"
}
},
"email": ""
},
{
"first": "Vishwa",
"middle": [],
"last": "Shah",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BITS",
"location": {
"country": "Pilani"
}
},
"email": ""
},
{
"first": "Ramit",
"middle": [],
"last": "Sawhney",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BITS",
"location": {
"country": "Pilani"
}
},
"email": "ramitsawhney@sharechat.co"
},
{
"first": "Debdoot",
"middle": [],
"last": "Mukherjee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BITS",
"location": {
"country": "Pilani"
}
},
"email": "debdoot.iitd@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "There have been efforts in cross-lingual transfer learning for various tasks. We present an approach utilizing an interpolative data augmentation method, Mixup, to improve the generalizability of models for part-of-speech tagging trained on a source language, improving its performance on unseen target languages. Through experiments on ten languages with diverse structures and language roots, we put forward its applicability for downstream zeroshot cross-lingual tasks.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "There have been efforts in cross-lingual transfer learning for various tasks. We present an approach utilizing an interpolative data augmentation method, Mixup, to improve the generalizability of models for part-of-speech tagging trained on a source language, improving its performance on unseen target languages. Through experiments on ten languages with diverse structures and language roots, we put forward its applicability for downstream zeroshot cross-lingual tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recently, neural network models have obtained state-of-the-art results in part-of-speech (POS) tagging tasks across multiple languages. Since numerous languages lack suitable corpora annotated with POS labels, there have been efforts to design models for cross-lingual transfer learning. Crosslingual learning enables us to utilize the annotated corpora of a source language to train models that are effective over a different target language. Interpolative data augmentation methods have been proposed to mitigate overfitting in models in the absence of enough training data. Sequence-based mixup (Chen et al., 2020) is an interpolative data augmentation method for named entity recognition. However, these methods have not been explored for cross-lingual transferability.",
"cite_spans": [
{
"start": 598,
"end": 617,
"text": "(Chen et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Interpolative data augmentation methods are aimed at increasing the diversity of the training distribution and, as a result, improving the generalizability of underlying models. We leverage this capability of sequence Mixup (Seq. Mixup) to capture rich linguistic information for cross-lingual transferability of POS tagging tasks for ten languages with different structures and language roots. To this end, we first measure the dataset level cosine similarity across languages, defined as the average of sentence-level embedding of dataset samples. This gives an overview of the syntactical and semantical relationship among the different languages. We then finetune multilingual models over a source language using sequence Mixup and evaluate it across a target language for varying language similarities, probing sequence-based interpolative data augmentation. We also evaluate sequence Mixup on combinations of similar and dissimilar languages and verify its transferability on various target languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Mixup (Zhang et al., 2018) is a data augmentation technique that generates virtual training samples from convex combinations of individual inputs and labels. For a pair of data points (x, y) and (x , y ), Mixup creates a new sample ( x, y) by interpolating the data points using a ratio \u03bb, sampled from a Beta distribution, where",
"cite_spans": [
{
"start": 6,
"end": 26,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "x = \u03bb\u2022x + (1 \u2212 \u03bb)\u2022x and corresponding mixed label y = \u03bb\u2022y + (1 \u2212 \u03bb)\u2022y .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
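As a reference for the interpolation above, here is a minimal NumPy sketch of Mixup; the Beta parameter `alpha`, the toy vectors, and the one-hot label encoding are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch of Mixup: x_tilde = lam*x + (1-lam)*x', y_tilde = lam*y + (1-lam)*y'.
# The Beta parameter `alpha` and the one-hot labels are illustrative assumptions.
import numpy as np

def mixup(x, y, x_prime, y_prime, alpha=0.2):
    """Interpolate two (input, one-hot label) pairs with a Beta-sampled ratio lambda."""
    lam = np.random.beta(alpha, alpha)        # mixing ratio lambda ~ Beta(alpha, alpha)
    x_mix = lam * x + (1.0 - lam) * x_prime   # mixed input
    y_mix = lam * y + (1.0 - lam) * y_prime   # mixed (soft) label
    return x_mix, y_mix, lam

# Example: mix two toy feature vectors whose labels are one-hot over 3 classes.
x1, y1 = np.array([1.0, 0.0, 2.0]), np.eye(3)[0]
x2, y2 = np.array([0.5, 1.5, 0.0]), np.eye(3)[2]
x_mix, y_mix, lam = mixup(x1, y1, x2, y2)
```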
{
"text": "We perform Mixup over the latent space representations for interpolating sequences. Pair of sentences (x, y) and (x , y ) are randomly sampled and interpolated in the hidden space using a L-layer encoder f (:, \u03b8). The hidden layer representations for x and x upto the k th layer are given as,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h l = f l (h l\u22121 ; \u03b8), l \u2208 [1, k] h l = f l (h l\u22121 ; \u03b8), l \u2208 [1, k]",
"eq_num": "(1)"
}
],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "At the k th layer, the hidden representations of each token in x are linearly interpolated with each token in x . After this, h k is fed to the upper layers,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h k = \u03bbh k + (1 \u2212 \u03bb)h k h l = f l ( h l\u22121 ; \u03b8), l \u2208 [k + 1, L]",
"eq_num": "(2)"
}
],
"section": "Methodology",
"sec_num": "2"
},
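A schematic PyTorch sketch of the hidden-space interpolation in Equations (1)-(2) follows; the toy stack of linear layers, the hidden size, and the choice of k are assumptions standing in for the actual multilingual encoder, and the token labels would be mixed analogously.

```python
# Sketch of sequence Mixup in the hidden space: encode both sequences up to layer k,
# interpolate the layer-k hidden states with ratio lambda, then continue through
# layers k+1..L. The toy encoder, dimensions, and k are illustrative assumptions.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    def __init__(self, dim=16, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_layers)])

    def forward_mixup(self, x, x_prime, lam, k):
        h, h_prime = x, x_prime
        for layer in self.layers[:k]:             # Eq. (1): layers 1..k for both sequences
            h, h_prime = torch.tanh(layer(h)), torch.tanh(layer(h_prime))
        h_mix = lam * h + (1.0 - lam) * h_prime   # Eq. (2): token-wise interpolation at layer k
        for layer in self.layers[k:]:             # feed the mixed states through layers k+1..L
            h_mix = torch.tanh(layer(h_mix))
        return h_mix

# Two toy "sentences" of 5 tokens each with hidden size 16; lambda sampled as in Mixup.
encoder = ToyEncoder()
x, x_prime = torch.randn(5, 16), torch.randn(5, 16)
lam = torch.distributions.Beta(0.2, 0.2).sample()
mixed_states = encoder.forward_mixup(x, x_prime, lam, k=2)
```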
{
"text": "We evaluate the performance of zero-shot learning on sequence Mixup, where the model is trained on one source language or a set of source languages and evaluated on a target language. To choose the source and target, for each language we average sentence level embeddings of dataset instances and use the average embeddings as language representation. Using these representations, we find the cosine similarity among the languages as shown in Figure 1 . This approach can be extended for tasks on under-resourced languages, where models can be trained on a similar high-resourced language or a set of languages. ",
"cite_spans": [],
"ref_spans": [
{
"start": 443,
"end": 451,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
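A possible implementation of this language-representation step is sketched below; mean-pooling mBERT's last hidden layer to obtain sentence embeddings is an assumption, since the excerpt does not state how the sentence-level embeddings are computed.

```python
# Sketch: represent each language by the average of (mean-pooled) mBERT sentence
# embeddings over its dataset, then compare languages with cosine similarity.
# Mean pooling of the last hidden layer is an assumption, not specified in the paper.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

def language_representation(sentences):
    """Dataset-level average of mean-pooled sentence embeddings for one language."""
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state              # (batch, seq_len, dim)
    mask = enc["attention_mask"].unsqueeze(-1)                # ignore padding tokens
    sent_emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # mean pool per sentence
    return sent_emb.mean(dim=0)                               # average over the dataset

def language_similarity(sentences_a, sentences_b):
    a, b = language_representation(sentences_a), language_representation(sentences_b)
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()
```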
{
"text": "We evaluate our approach on POS tagging with datasets from the Universal Dependencies (UD) dataset 1 for ten different languages -Arabic (ar), Dutch (nl), French (fr), German (de), Hindi (hi), Indonesian (id), Italian (it), Marathi (mr), Vietnamese (vi) and Urdu (ur). For each experiment, we use 800 sentences for each source language for training, 100 sentences for validation and test each from the target language for evaluation. BERT-basemultilingual-cased (mBERT) has been used as an encoder in sequence Mixup and for obtaining the embeddings to evaluate cosine similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
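A possible data-preparation sketch follows; the third-party `conllu` package, the specific UD treebank file names, and the French-to-Italian pairing are assumptions used purely for illustration.

```python
# Sketch: read Universal Dependencies .conllu files into (token, UPOS tag) sequences
# and take the 800/100/100 split described above. The `conllu` package and the
# treebank file paths are assumptions, not tools or files named by the paper.
from conllu import parse_incr

def read_upos(path, limit):
    sentences = []
    with open(path, encoding="utf-8") as f:
        for sent in parse_incr(f):
            # keep only regular tokens (integer ids), skipping multiword-token ranges
            tokens = [tok["form"] for tok in sent if isinstance(tok["id"], int)]
            tags = [tok["upos"] for tok in sent if isinstance(tok["id"], int)]
            sentences.append((tokens, tags))
            if len(sentences) == limit:
                break
    return sentences

train = read_upos("UD_French-GSD/fr_gsd-ud-train.conllu", limit=800)   # source language
dev = read_upos("UD_Italian-ISDT/it_isdt-ud-dev.conllu", limit=100)    # target language
test = read_upos("UD_Italian-ISDT/it_isdt-ud-test.conllu", limit=100)
```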
{
"text": "Training Setup: The learning rate is 5e-5 with Adam optimizer and batch size 16. All hyperparameters are selected based on validation F1-score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
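A minimal sketch of the stated training setup follows; using `AutoModelForTokenClassification` as the tagging head on top of mBERT is an assumption, as the excerpt only names mBERT as the encoder.

```python
# Sketch matching the stated hyperparameters: Adam optimizer, learning rate 5e-5,
# batch size 16. The token-classification head is an assumption; UD defines 17 UPOS tags.
import torch
from transformers import AutoModelForTokenClassification

NUM_UPOS_TAGS = 17
BATCH_SIZE = 16   # hyperparameters selected on validation F1-score

model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=NUM_UPOS_TAGS
)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
```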
{
"text": "We train the model on a single source language and a different target language dataset to evaluate its performance, as shown in Table 1 . As sequence Mixup trains on interpolated sequences, it regularizes the model and prevents overfitting, outperforming mBERT. For the target language Italian, we observe higher F1 when the source language is French and lower scores for source languages Hindi and Arabic. This is in line with the trend observed in Fig 1, where the cosine similarity for the language pair (French, Italian) is highest, lower for (Hindi, Italian) and lowest for (Arabic, Italian). We observe large improvements when sequence ",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 135,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 450,
"end": 456,
"text": "Fig 1,",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Single Language Transfer",
"sec_num": "3.1"
},
{
"text": "To extend our experiments, we choose a pair of languages on which the model is trained and present the results in Table 2 . This helps to infer in what manner additional language data impacts the performance. Languages Dutch, German and French have high cosine similarity, leading to larger improvement for the Dutch language compared to single language transfer. For target language Italian, F1-score decreases when trained on both Arabic and French data; this can be reasoned by the low cosine similarity of Arabic and Italian language. ",
"cite_spans": [],
"ref_spans": [
{
"start": 114,
"end": 121,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Multi Language Transfer",
"sec_num": "3.2"
},
{
"text": "We analyze interpolative regularization based data augmentation over tokens for zero-shot crosslingual transfer of part-of-speech tagging across ten languages. Through extensive experiments over languages with varying syntactic and semantic structures on single and pair of languages, we pave the way for using interpolative data augmentation to improve the generalizability of neural networks for zero-shot transfer learning on downstream tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Local additivity based data augmentation for semi-supervised NER",
"authors": [
{
"first": "Jiaao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhenghui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ran",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1241--1251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaao Chen, Zhenghui Wang, Ran Tian, Zichao Yang, and Diyi Yang. 2020. Local additivity based data augmentation for semi-supervised NER. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1241-1251, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "mixup: Beyond empirical risk minimization",
"authors": [
{
"first": "Hongyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Moustapha",
"middle": [],
"last": "Cisse",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Lopez-Paz",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empir- ical risk minimization. In International Conference on Learning Representations.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Cosine Similarity of the ten languages.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"html": null,
"num": null,
"text": "Mixup is applied over dissimilar languages, validating that Mixup is able to generate more diverse input samples which intersect with the target language structure and semantics.",
"type_str": "table",
"content": "<table><tr><td>Source</td><td>Target</td><td colspan=\"2\">mBERT Seq. Mixup</td></tr><tr><td colspan=\"3\">High Similarity Italian 94.52 Indonesian Vietnamese French 56.08 German Dutch 85.32 Hindi Marathi 64.41 Urdu Arabic 44.61</td><td>94.75 56.34 85.48 64.97 47.38</td></tr><tr><td>Hindi French Arabic</td><td colspan=\"2\">Low Similarity Italian 58.63 Arabic 39.63 Italian 25.41</td><td>63.71 40.55 28.42</td></tr></table>"
},
"TABREF1": {
"html": null,
"num": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td>: F1-scores for POS tagging on Seq. Mixup and mBERT (mean of 10 runs). Improvements are shown with blue (\u2191) over mBERT.</td></tr></table>"
},
"TABREF3": {
"html": null,
"num": null,
"text": "F1-scores for POS tagging on Seq. Mixup and mBERT (mean of 10 runs) when trained on two source languages (New+Original). Improvements are shown with blue (\u2191) and poorer performance with red (\u2193).",
"type_str": "table",
"content": "<table/>"
}
}
}
}