{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:13:50.248125Z"
},
"title": "DMIX: Distance Constrained Interpolative Mixup",
"authors": [
{
"first": "Ramit",
"middle": [],
"last": "Sawhney",
"suffix": "",
"affiliation": {},
"email": "ramitsawhney@sharechat.co"
},
{
"first": "Megh",
"middle": [],
"last": "Thakkar",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Shrey",
"middle": [],
"last": "Pandit",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Debdoot",
"middle": [],
"last": "Mukherjee",
"suffix": "",
"affiliation": {},
"email": "debdoot.iit@gmail.com"
},
{
"first": "Lucie",
"middle": [],
"last": "Flek",
"suffix": "",
"affiliation": {},
"email": "lucie.flek@uni-marburg.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Interpolation-based regularisation methods have proven to be effective for various tasks and modalities. Mixup is a data augmentation method that generates virtual training samples from convex combinations of individual inputs and labels. We extend Mixup and propose DMIX, distance-constrained interpolative Mixup for sentence classification leveraging the hyperbolic space. DMIX achieves state-ofthe-art results on sentence classification over existing data augmentation methods across datasets in four languages.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Interpolation-based regularisation methods have proven to be effective for various tasks and modalities. Mixup is a data augmentation method that generates virtual training samples from convex combinations of individual inputs and labels. We extend Mixup and propose DMIX, distance-constrained interpolative Mixup for sentence classification leveraging the hyperbolic space. DMIX achieves state-ofthe-art results on sentence classification over existing data augmentation methods across datasets in four languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Deep learning models are effective across a wide range of applications. However, these models are prone to overfitting when only limited training data is available. Interpolation-based approaches such as Mixup (Zhang et al., 2018) have shown improved performance across different modalities. Mixup over latent representations of inputs has led to further improvements, as latent representations often carry more information than raw input samples. However, Mixup does not account for the spatial distribution of data samples, and chooses samples randomly.",
"cite_spans": [
{
"start": 210,
"end": 230,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While randomization in Mixup helps, augmenting Mixup's sample selection strategy with logic based on the similarity of the samples to be mixed can lead to improved generalization. Further, natural language text possesses hierarchical structures and complex geometries, which the standard Euclidean space cannot capture effectively. In such a scenario, hyperbolic geometry presents a solution in defining similarity between latent representations via hyperbolic distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose DMIX, a distance-constrained interpolative data augmentation method. Instead of choosing random inputs from the complete training * equal contribution distribution as in the case of vanilla Mixup, DMIX samples instances based on the (dis)similarity between latent representations of samples in the hyperbolic space. We probe DMIX through experiments on sentence classification tasks across four languages, obtaining state-of-the-art results over existing data augmentation techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given two data samples x i , x j \u2208 X with labels y i , y j \u2208 Y , Mixup (Zhang et al., 2018) uses linear interpolation with mixing ratio r to generate the synthetic sample x = r\u2022x i + (1 \u2212 r)\u2022x j and corresponding mixed label y = r\u2022y i + (1 \u2212 r)\u2022y j . Interpolative Mixup (Chen et al., 2020) performs linear interpolation over the latent representations of models.",
"cite_spans": [
{
"start": 71,
"end": 91,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 271,
"end": 290,
"text": "(Chen et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "Let f \u03b8 (\u2022) be a model with parameters \u03b8 having N layers, f \u03b8,n (\u2022) denotes the n-th layer of the model and h n is the hidden space vector at layer n for n \u2208 [1, N ] and h 0 denotes the input vector. To perform interpolative Mixup at a layer k \u223c [1, N ], we first calculate the latent representations separately for the inputs for layers before the k-th layer. For input samples x i , x j , we let h i n , h j n denote their respective hidden state representations at layer n,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i n = f \u03b8,n (h i n\u22121 ), n \u2208 [1, k] h j n = f \u03b8,n (h j n\u22121 ), n \u2208 [1, k]",
"eq_num": "(1)"
}
],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "We then perform Mixup over individual hidden state representations h i k , h j k from layer k as,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h k = r\u2022h i k + (1 \u2212 r)\u2022h j k",
"eq_num": "(2)"
}
],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "The mixed hidden representation h k is used as the input for the continuing forward pass,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "hn = f \u03b8,n (hn\u22121); n \u2208 [k + 1, N ]",
"eq_num": "(3)"
}
],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "DMIX To perform distance-constrained interpolative Mixup, for a sample x i , we calculate",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "its similarity with every other sample x \u2208 X based on their sentence embeddings. As natural language exhibits hierarchical structure, embeddings are more expressive when represented in the hyperbolic space (Dhingra et al., 2018) . We use the hyperbolic distance D h = 2 tanh \u22121 (\u2225(\u2212x i ) \u2295 x\u2225) as a similarity measure. We sort the distances for x i in decreasing order and randomly select one sample x j from the top-\u03c4 samples, where \u03c4 is a hyperparameter we call the threshold. Formally, Table 1 : Performance comparison in terms of F1 score of DMIX with vanilla Mixup and distance-constrained Mixup methods using different similarity techniques (average of 10 runs). Improvements are shown with blue (\u2191) and poorer performance with red (\u2193). * shows significant (p < 0.01) improvement over Mixup.",
"cite_spans": [
{
"start": 206,
"end": 228,
"text": "(Dhingra et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 489,
"end": 496,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "R E T R A C T E D",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "xj \u223c top-\u03c4 ([D h (xi, x)\u2200x \u2208 X])",
"eq_num": "(4)"
}
],
"section": "R E T R A C T E D",
"sec_num": null
},
{
"text": "We observe that distance-constrained Mixup outperforms vanilla Mixup (p < 0.01) across numerous tasks and distance based (dis)similarity formulation, validating that similarity-based sample selection improves model performance, likely owing to enhanced diversity or minimizing sparsification across tasks. Within distance-constrained Mixup, we observe that DMIX, the hyperbolic distance variant outperforms Euclidean distance and cosine similarity measures. This suggests that the hyperbolic space is more capable of capturing the complex hierarchical information present in sentence representations, leading to more pronounced comparisons and sample selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "R E T R A C T E D",
"sec_num": null
},
{
"text": "We perform an ablation study by varying the threshold \u03c4 for DMix and present it in Figure 1 1 . An increasing \u03c4 denotes a larger distribution space for sampling instances for Mixup, and a \u03c4 of 100% degenerating to vanilla Mixup. We observe an initial increase in the performance as we expand the sampling embedding space, and then it decreases, essentially decomposing into randomized Mixup. This suggests the existence of an optimum set of input samples for performing Mixup, and we conjecture it can be related to the sparsity in the embedding distribution of different languages. ",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 91,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Threshold Variation Analysis",
"sec_num": "3.2"
},
{
"text": "We propose DMIX, an interpolative regularization-based data augmentation technique that samples inputs based on their latent hyperbolic similarity. DMIX achieves state-of-the-art results over existing data augmentation approaches on datasets in four languages. We further analyze DMIX through ablations over different similarity threshold values across the languages. Being data-, modality-, and model-agnostic, DMIX holds potential to be applied to text, speech, and vision tasks. 1 We obtain similar results for TTC and GHC.",
"cite_spans": [
{
"start": 482,
"end": 483,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Mix-Text: Linguistically-informed interpolation of hidden space for semi-supervised text classification",
"authors": [
{
"first": "Jiaao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2147--2157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaao Chen, Zichao Yang, and Diyi Yang. 2020. Mix- Text: Linguistically-informed interpolation of hid- den space for semi-supervised text classification. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2147- 2157, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Embedding text in hyperbolic spaces",
"authors": [
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Shallue",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Dahl",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twelfth Workshop on Graph-Based Methods for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhuwan Dhingra, Christopher Shallue, Mohammad Norouzi, Andrew Dai, and George Dahl. 2018. Em- bedding text in hyperbolic spaces. In Proceed- ings of the Twelfth Workshop on Graph-Based Meth- ods for Natural Language Processing (TextGraphs- 12), New Orleans, Louisiana, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "mixup: Beyond empirical risk minimization",
"authors": [
{
"first": "Hongyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Moustapha",
"middle": [],
"last": "Cisse",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Lopez-Paz",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empir- ical risk minimization. In International Conference on Learning Representations.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Change in performance in terms of F1 with varying threshold for DMIX. A threshold of 100% decomposes DMIX into vanilla Mixup."
},
"TABREF0": {
"type_str": "table",
"text": "-DMIX (Hyperbolic) 79.19 * 99.30 * 32.00 * 69.67 *",
"num": null,
"content": "<table><tr><td colspan=\"2\">3 Experiments and Results</td></tr><tr><td colspan=\"2\">We evaluate DMIX on sentence classification tasks:</td></tr><tr><td colspan=\"2\">Arabic Hate Speech Detection AHS is a binary classification task over 3950 Arabic tweets contain-</td></tr><tr><td>ing hate speech.</td><td/></tr><tr><td colspan=\"2\">English SMS Spam Collection ESSC is a dataset with 5574 raw text messages classified as spam or</td></tr><tr><td>not spam.</td><td/></tr><tr><td colspan=\"2\">Turkish News Classification TTC-3600 contains 3600 Turkish news text across six news categories.</td></tr><tr><td colspan=\"2\">Gujarati Headline Classification GHC has 1632 Gujarati news headlines over three news categories.</td></tr><tr><td colspan=\"2\">Training Setup: Mixup is performed over a ran-dom layer sampled from all the layers of the model.</td></tr><tr><td colspan=\"2\">The model was trained with a learning rate of 2e-5,</td></tr><tr><td colspan=\"2\">with a training batch size of 8 and a weight decay</td></tr><tr><td colspan=\"2\">of 0.01. All hyperparameters were selected based</td></tr><tr><td colspan=\"2\">on validation F1-score.</td></tr><tr><td colspan=\"2\">3.1 Performance Comparison</td></tr><tr><td>Model</td><td>AHS ESSC TTC GHC</td></tr><tr><td>mBERT +Input Mixup +Sentence Mixup +Mixup</td><td>66.20 98.30 28.54 64.88 67.10 98.60 30.05 65.64 67.50 98.40 30.88 65.60 67.78 95.90 30.71 66.41</td></tr><tr><td colspan=\"2\">mBERT+distance-constrained Mixup (Ours) -Euclidean 74.42 * 86.87 30.89 65.88 -Cosine 77.50</td></tr></table>",
"html": null
}
}
}
} |