{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:13:50.248125Z"
},
"title": "DMIX: Distance Constrained Interpolative Mixup",
"authors": [
{
"first": "Ramit",
"middle": [],
"last": "Sawhney",
"suffix": "",
"affiliation": {},
"email": "ramitsawhney@sharechat.co"
},
{
"first": "Megh",
"middle": [],
"last": "Thakkar",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Shrey",
"middle": [],
"last": "Pandit",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Debdoot",
"middle": [],
"last": "Mukherjee",
"suffix": "",
"affiliation": {},
"email": "debdoot.iit@gmail.com"
},
{
"first": "Lucie",
"middle": [],
"last": "Flek",
"suffix": "",
"affiliation": {},
"email": "lucie.flek@uni-marburg.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Interpolation-based regularisation methods have proven to be effective for various tasks and modalities. Mixup is a data augmentation method that generates virtual training samples from convex combinations of individual inputs and labels. We extend Mixup and propose DMIX, distance-constrained interpolative Mixup for sentence classification leveraging the hyperbolic space. DMIX achieves state-ofthe-art results on sentence classification over existing data augmentation methods across datasets in four languages.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Interpolation-based regularisation methods have proven to be effective for various tasks and modalities. Mixup is a data augmentation method that generates virtual training samples from convex combinations of individual inputs and labels. We extend Mixup and propose DMIX, distance-constrained interpolative Mixup for sentence classification leveraging the hyperbolic space. DMIX achieves state-ofthe-art results on sentence classification over existing data augmentation methods across datasets in four languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Deep learning models are effective across a wide range of applications. However, these models are prone to overfitting when only limited training data is available. Interpolation-based approaches such as Mixup (Zhang et al., 2018) have shown improved performance across different modalities. Mixup over latent representations of inputs has led to further improvements, as latent representations often carry more information than raw input samples. However, Mixup does not account for the spatial distribution of data samples, and chooses samples randomly.",
"cite_spans": [
{
"start": 210,
"end": 230,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While randomization in Mixup helps, augmenting Mixup's sample selection strategy with logic based on the similarity of the samples to be mixed can lead to improved generalization. Further, natural language text possesses hierarchical structures and complex geometries, which the standard Euclidean space cannot capture effectively. In such a scenario, hyperbolic geometry presents a solution in defining similarity between latent representations via hyperbolic distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose DMIX, a distance-constrained interpolative data augmentation method. Instead of choosing random inputs from the complete training * equal contribution distribution as in the case of vanilla Mixup, DMIX samples instances based on the (dis)similarity between latent representations of samples in the hyperbolic space. We probe DMIX through experiments on sentence classification tasks across four languages, obtaining state-of-the-art results over existing data augmentation techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Interpolative Mixup Given two data samples x_i, x_j \u2208 X with labels y_i, y_j \u2208 Y, Mixup (Zhang et al., 2018) uses linear interpolation with mixing ratio r to generate the synthetic sample x\u0303 = r\u2022x_i + (1 \u2212 r)\u2022x_j and the corresponding mixed label y\u0303 = r\u2022y_i + (1 \u2212 r)\u2022y_j. Interpolative Mixup (Chen et al., 2020) performs this linear interpolation over the latent representations of models instead.",
"cite_spans": [
{
"start": 88,
"end": 108,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 291,
"end": 310,
"text": "(Chen et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
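A minimal sketch of the Mixup interpolation defined above, in Python/NumPy. The Beta prior on the mixing ratio r follows the original mixup paper (Zhang et al., 2018); all names are illustrative rather than the authors' code.

```python
import numpy as np

def mixup(x_i, x_j, y_i, y_j, alpha=0.2):
    """Vanilla Mixup: convex combination of two inputs and their one-hot labels."""
    r = np.random.beta(alpha, alpha)   # mixing ratio r ~ Beta(alpha, alpha)
    x_mix = r * x_i + (1 - r) * x_j    # synthetic sample
    y_mix = r * y_i + (1 - r) * y_j    # mixed label
    return x_mix, y_mix
```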
{
"text": "Let f \u03b8 (\u2022) be a model with parameters \u03b8 having N layers, f \u03b8,n (\u2022) denotes the n-th layer of the model and h n is the hidden space vector at layer n for n \u2208 [1, N ] and h 0 denotes the input vector. To perform interpolative Mixup at a layer k \u223c [1, N ], we first calculate the latent representations separately for the inputs for layers before the k-th layer. For input samples x i , x j , we let h i n , h j n denote their respective hidden state representations at layer n,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i n = f \u03b8,n (h i n\u22121 ), n \u2208 [1, k] h j n = f \u03b8,n (h j n\u22121 ), n \u2208 [1, k]",
"eq_num": "(1)"
}
],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "We then perform Mixup over individual hidden state representations h i k , h j k from layer k as,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h k = r\u2022h i k + (1 \u2212 r)\u2022h j k",
"eq_num": "(2)"
}
],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "The mixed hidden representation h k is used as the input for the continuing forward pass,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "hn = f \u03b8,n (hn\u22121); n \u2208 [k + 1, N ]",
"eq_num": "(3)"
}
],
"section": "Methodology",
"sec_num": "2"
},
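Equations (1)-(3) amount to running both inputs through the first k layers, mixing once at layer k, and finishing the forward pass with the mixed state. Below is a sketch under the assumption that the model is exposed as a list of callable layers; this is illustrative, not the authors' implementation.

```python
import random

def interpolative_mixup_forward(layers, x_i, x_j, r):
    """Interpolative Mixup at a random layer k, following Eqs. (1)-(3).

    `layers` is assumed to be a list of callables (e.g. torch.nn.Module objects).
    """
    k = random.randint(1, len(layers))    # mixing layer k ~ [1, N]
    h_i, h_j = x_i, x_j                   # h_0 is the input vector
    for layer in layers[:k]:              # Eq. (1): separate passes up to layer k
        h_i, h_j = layer(h_i), layer(h_j)
    h = r * h_i + (1 - r) * h_j           # Eq. (2): mix the hidden states
    for layer in layers[k:]:              # Eq. (3): continue with the mixed state
        h = layer(h)
    return h
```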
{
"text": "DMIX To perform distance-constrained interpolative Mixup, for a sample x i , we calculate",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "its similarity with every other sample x \u2208 X between their sentence embedding. As natural language exhibits hierarchical structure, embeddings are more expressive when represented in the hyperbolic space (Dhingra et al., 2018) . We use hyperbolic distance D h = 2 tan \u22121 ( (\u2212x i ) \u2295 x ) as a similarity measure. We sort the distances in decreasing order for x i , and randomly select one sample x j from top-\u03c4 samples, where \u03c4 is a hyperparameter, which we call threshold. Formally, Table 1 : Performance comparison in terms of F1 score of DMix with vanilla Mixup and distance-constrained Mixup methods using different similarity techniques (average of 10 runs). Improvements are shown with blue (\u2191) and poorer performance with red (\u2193). * shows significant (p < 0.01) improvement over Mixup.",
"cite_spans": [
{
"start": 204,
"end": 226,
"text": "(Dhingra et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 483,
"end": 490,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "R E T R A C T E D",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "xj \u223c top-\u03c4 ([D h (xi, x)\u2200x \u2208 X])",
"eq_num": "(4)"
}
],
"section": "R E T R A C T E D",
"sec_num": null
},
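A sketch of the DMIX partner-selection step in Eq. (4). The Möbius addition and the tanh^{-1} form of the distance are the standard Poincaré-ball formulas and are assumptions here, since the parse garbles the original expression; embeddings are assumed to lie in the open unit ball.

```python
import numpy as np

def mobius_add(x, y):
    """Mobius addition on the Poincare ball (curvature -1); assumed definition of the (+) operator."""
    xy, x2, y2 = x @ y, x @ x, y @ y
    return ((1 + 2 * xy + y2) * x + (1 - x2) * y) / (1 + 2 * xy + x2 * y2)

def hyperbolic_distance(x_i, x):
    # D_h = 2 tanh^{-1}(||(-x_i) (+) x||); valid for points inside the unit ball
    return 2.0 * np.arctanh(np.linalg.norm(mobius_add(-x_i, x)))

def sample_partner(x_i, candidates, tau=0.1, rng=None):
    """Eq. (4): draw x_j uniformly from the top-tau fraction of candidates,
    ranked by decreasing hyperbolic distance as described in the text."""
    rng = rng or np.random.default_rng()
    d = [hyperbolic_distance(x_i, x) for x in candidates]
    pool = np.argsort(d)[::-1][: max(1, int(tau * len(candidates)))]
    return candidates[int(rng.choice(pool))]
```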
{
"text": "We observe that distance-constrained Mixup outperforms vanilla Mixup (p < 0.01) across numerous tasks and distance based (dis)similarity formulation, validating that similarity-based sample selection improves model performance, likely owing to enhanced diversity or minimizing sparsification across tasks. Within distance-constrained Mixup, we observe that DMIX, the hyperbolic distance variant outperforms Euclidean distance and cosine similarity measures. This suggests that the hyperbolic space is more capable of capturing the complex hierarchical information present in sentence representations, leading to more pronounced comparisons and sample selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "R E T R A C T E D",
"sec_num": null
},
{
"text": "We perform an ablation study by varying the threshold \u03c4 for DMix and present it in Figure 1 1 . An increasing \u03c4 denotes a larger distribution space for sampling instances for Mixup, and a \u03c4 of 100% degenerating to vanilla Mixup. We observe an initial increase in the performance as we expand the sampling embedding space, and then it decreases, essentially decomposing into randomized Mixup. This suggests the existence of an optimum set of input samples for performing Mixup, and we conjecture it can be related to the sparsity in the embedding distribution of different languages. ",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 91,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Threshold Variation Analysis",
"sec_num": "3.2"
},
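To make the ablation concrete, τ can be read as the fraction of the training set eligible as Mixup partners; at τ = 100% the pool is the entire set and DMIX reduces to vanilla Mixup's random sampling. A toy illustration, reusing the hypothetical sample_partner sketch above and the AHS dataset size quoted in the experiments:

```python
n = 3950  # e.g. the AHS dataset size from the experiments section
for tau in (0.05, 0.25, 0.50, 1.00):
    pool = max(1, int(tau * n))   # candidates eligible as Mixup partners
    print(f"tau={tau:.0%} -> x_j drawn from the {pool} top-ranked candidates")
```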
{
"text": "We propose DMIX, an interpolative regularization based data augmentation technique sampling inputs based on their latent hyperbolic similarity. DMIX achieves state-of-the-art results over existing data augmentation approaches on datasets in four languages.We further analyze DMIX through ablations over different similarity threshold values across the languages. DMIX being data-, modality-, and model-agnostic, holds potential to be applied on text, speech, and vision tasks. 1 We obtain similar results for TTC and GHC.",
"cite_spans": [
{
"start": 477,
"end": 478,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Mix-Text: Linguistically-informed interpolation of hidden space for semi-supervised text classification",
"authors": [
{
"first": "Jiaao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2147--2157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaao Chen, Zichao Yang, and Diyi Yang. 2020. Mix- Text: Linguistically-informed interpolation of hid- den space for semi-supervised text classification. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2147- 2157, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Embedding text in hyperbolic spaces",
"authors": [
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Shallue",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Dahl",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twelfth Workshop on Graph-Based Methods for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhuwan Dhingra, Christopher Shallue, Mohammad Norouzi, Andrew Dai, and George Dahl. 2018. Em- bedding text in hyperbolic spaces. In Proceed- ings of the Twelfth Workshop on Graph-Based Meth- ods for Natural Language Processing (TextGraphs- 12), New Orleans, Louisiana, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "mixup: Beyond empirical risk minimization",
"authors": [
{
"first": "Hongyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Moustapha",
"middle": [],
"last": "Cisse",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Lopez-Paz",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empir- ical risk minimization. In International Conference on Learning Representations.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Change in performance in terms of F1 with varying threshold for DMIX. A threshold of 100% decomposes DMIX into vanilla Mixup."
},
"TABREF0": {
"type_str": "table",
"text": "-DMIX (Hyperbolic) 79.19 * 99.30 * 32.00 * 69.67 *",
"num": null,
"content": "<table><tr><td colspan=\"2\">3 Experiments and Results</td></tr><tr><td colspan=\"2\">We evaluate DMIX on sentence classification tasks:</td></tr><tr><td colspan=\"2\">Arabic Hate Speech Detection AHS is a binary classification task over 3950 Arabic tweets contain-</td></tr><tr><td>ing hate speech.</td><td/></tr><tr><td colspan=\"2\">English SMS Spam Collection ESSC is a dataset with 5574 raw text messages classified as spam or</td></tr><tr><td>not spam.</td><td/></tr><tr><td colspan=\"2\">Turkish News Classification TTC-3600 contains 3600 Turkish news text across six news categories.</td></tr><tr><td colspan=\"2\">Gujarati Headline Classification GHC has 1632 Gujarati news headlines over three news categories.</td></tr><tr><td colspan=\"2\">Training Setup: Mixup is performed over a ran-dom layer sampled from all the layers of the model.</td></tr><tr><td colspan=\"2\">The model was trained with a learning rate of 2e-5,</td></tr><tr><td colspan=\"2\">with a training batch size of 8 and a weight decay</td></tr><tr><td colspan=\"2\">of 0.01. All hyperparameters were selected based</td></tr><tr><td colspan=\"2\">on validation F1-score.</td></tr><tr><td colspan=\"2\">3.1 Performance Comparison</td></tr><tr><td>Model</td><td>AHS ESSC TTC GHC</td></tr><tr><td>mBERT +Input Mixup +Sentence Mixup +Mixup</td><td>66.20 98.30 28.54 64.88 67.10 98.60 30.05 65.64 67.50 98.40 30.88 65.60 67.78 95.90 30.71 66.41</td></tr><tr><td colspan=\"2\">mBERT+distance-constrained Mixup (Ours) -Euclidean 74.42 * 86.87 30.89 65.88 -Cosine 77.50</td></tr></table>",
"html": null
}
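The training setup quoted in the table content (learning rate 2e-5, batch size 8, weight decay 0.01, model selection on validation F1) maps onto a standard fine-tuning configuration. A hedged sketch using the Hugging Face TrainingArguments API; the paper does not name its framework, and the output path is hypothetical.

```python
from transformers import TrainingArguments

# Learning rate, batch size, and weight decay are the values quoted in the paper;
# the framework choice and output path are assumptions for illustration.
args = TrainingArguments(
    output_dir="dmix-mbert",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    weight_decay=0.01,
)
# The paper states that all hyperparameters were selected on validation F1 score.
```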
}
}
}