|
--- |
|
license: apache-2.0 |
|
dataset_info: |
|
- config_name: lcsalign-en |
|
features: |
|
- name: text |
|
dtype: string |
|
splits: |
|
- name: test |
|
num_bytes: 305023 |
|
num_examples: 2507 |
|
- name: train |
|
num_bytes: 455104487 |
|
num_examples: 4200000 |
|
- name: valid |
|
num_bytes: 21217 |
|
num_examples: 280 |
|
download_size: 318440274 |
|
dataset_size: 455430727 |
|
- config_name: lcsalign-hi |
|
features: |
|
- name: text |
|
dtype: string |
|
splits: |
|
- name: test |
|
num_bytes: 770118 |
|
num_examples: 2507 |
|
- name: train |
|
num_bytes: 1084853757 |
|
num_examples: 4200000 |
|
- name: valid |
|
num_bytes: 45670 |
|
num_examples: 280 |
|
download_size: 470820787 |
|
dataset_size: 1085669545 |
|
- config_name: lcsalign-hicm |
|
features: |
|
- name: text |
|
dtype: string |
|
splits: |
|
- name: test |
|
num_bytes: 561442 |
|
num_examples: 2507 |
|
- name: train |
|
num_bytes: 872213032 |
|
num_examples: 4200000 |
|
- name: valid |
|
num_bytes: 34530 |
|
num_examples: 280 |
|
download_size: 455501891 |
|
dataset_size: 872809004 |
|
- config_name: lcsalign-hicmdvg |
|
features: |
|
- name: text |
|
dtype: string |
|
splits: |
|
- name: test |
|
num_bytes: 798126 |
|
num_examples: 2507 |
|
- name: train |
|
num_bytes: 1104443176 |
|
num_examples: 4200000 |
|
- name: valid |
|
num_bytes: 47513 |
|
num_examples: 280 |
|
download_size: 491775164 |
|
dataset_size: 1105288815 |
|
- config_name: lcsalign-hicmrom |
|
features: |
|
- name: text |
|
dtype: string |
|
splits: |
|
- name: test |
|
num_bytes: 338176 |
|
num_examples: 2507 |
|
- name: train |
|
num_bytes: 467370942 |
|
num_examples: 4200000 |
|
- name: valid |
|
num_bytes: 20431 |
|
num_examples: 280 |
|
download_size: 337385029 |
|
dataset_size: 467729549 |
|
- config_name: lcsalign-noisyhicmrom |
|
features: |
|
- name: text |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 462418855 |
|
num_examples: 4200000 |
|
- name: test |
|
num_bytes: 334401 |
|
num_examples: 2507 |
|
- name: valid |
|
num_bytes: 20246 |
|
num_examples: 280 |
|
download_size: 379419827 |
|
dataset_size: 462773502 |
|
configs: |
|
- config_name: lcsalign-en |
|
data_files: |
|
- split: test |
|
path: lcsalign-en/test-* |
|
- split: train |
|
path: lcsalign-en/train-* |
|
- split: valid |
|
path: lcsalign-en/valid-* |
|
- config_name: lcsalign-hi |
|
data_files: |
|
- split: test |
|
path: lcsalign-hi/test-* |
|
- split: train |
|
path: lcsalign-hi/train-* |
|
- split: valid |
|
path: lcsalign-hi/valid-* |
|
- config_name: lcsalign-hicm |
|
data_files: |
|
- split: test |
|
path: lcsalign-hicm/test-* |
|
- split: train |
|
path: lcsalign-hicm/train-* |
|
- split: valid |
|
path: lcsalign-hicm/valid-* |
|
- config_name: lcsalign-hicmdvg |
|
data_files: |
|
- split: test |
|
path: lcsalign-hicmdvg/test-* |
|
- split: train |
|
path: lcsalign-hicmdvg/train-* |
|
- split: valid |
|
path: lcsalign-hicmdvg/valid-* |
|
- config_name: lcsalign-hicmrom |
|
data_files: |
|
- split: test |
|
path: lcsalign-hicmrom/test-* |
|
- split: train |
|
path: lcsalign-hicmrom/train-* |
|
- split: valid |
|
path: lcsalign-hicmrom/valid-* |
|
- config_name: lcsalign-noisyhicmrom |
|
data_files: |
|
- split: train |
|
path: lcsalign-noisyhicmrom/train-* |
|
- split: test |
|
path: lcsalign-noisyhicmrom/test-* |
|
- split: valid |
|
path: lcsalign-noisyhicmrom/valid-* |
|
task_categories: |
|
- translation |
|
language: |
|
- hi |
|
- en |
|
tags: |
|
- codemix |
|
- indicnlp |
|
- hindi |
|
- english |
|
- multilingual |
|
pretty_name: Hindi-English Codemix Datasets |
|
size_categories: |
|
- 1M<n<10M |
|
--- |
|
# Dataset Card for HINMIX: Hindi-English Codemix Dataset
|
|
|
**HINMIX is a massive parallel codemixed dataset for Hindi-English code switching.** |
|
|
|
See the [📚 paper on arXiv](https://arxiv.org/abs/2403.16771) for details of the synthetic code-mix data generation pipeline.

The dataset contains 4.2M fully parallel sentences in six Hindi-English forms.
|
|
|
Further, we release gold-standard code-mixed dev and test sets, manually translated by proficient bilingual annotators.

- The dev set consists of 280 examples
- The test set consists of 2,507 examples
|
|
|
To load the dataset: |
|
```python
# pip install datasets
from datasets import load_dataset

# Choose one config from: lcsalign-en, lcsalign-hi, lcsalign-hicm,
# lcsalign-hicmdvg, lcsalign-hicmrom, lcsalign-noisyhicmrom
hinmix_ds = load_dataset("kartikagg98/HINMIX_hi-en", "lcsalign-hicmrom")

print([hinmix_ds[split][10]["text"] for split in ["train", "valid", "test"]])
```
|
Output: |
|
```
['events hi samay men kahin south malabar men ghati hai.',
 'beherhaal, pulis ne body ko sector-16 ke hospital ki mortuary men rakhva diya hai.',
 'yah hamare country ke liye reality men mandatory thing hai.']
```
|
|
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
We construct a large synthetic Hinglish-English dataset by leveraging a bilingual Hindi-English corpus.

Splits: train, test, valid

Subsets:

- **Hi** - Hindi in Devanagari script (**Example**: *अमेरिकी लोग अब पहले जितनी गैस नहीं खरीदते।*)
- **Hicm** - Hindi sentences with selected words substituted by their English counterparts (**Example**: *American people अब पहले जितनी gas नहीं खरीदते।*)
- **Hicmrom** - Hicm with romanized Hindi words (**Example**: *American people ab pahle jitni gas nahin kharidte.*)
- **Hicmdvg** - Hicm with English words transliterated to Devanagari (**Example**: *अमेरिकन पेओपल अब पहले जितनी गैस नहीं खरीदते।*)
- **NoisyHicmrom** - Hicmrom with synthetic noise added to improve model robustness (**Example**: *Aerican people ab phle jtni gas nain khridte.*)
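The NoisyHicmrom perturbations can be sketched as character-level noise. This is an illustrative approximation only: the actual noise functions are in the paper's repository, and `add_noise` is a hypothetical helper.

```python
import random

def add_noise(sentence, p=0.1, seed=0):
    """Character-level noise approximating omission, switch, typo, and
    random replacement on a romanized code-mixed sentence."""
    rng = random.Random(seed)
    letters = "abcdefghijklmnopqrstuvwxyz"
    chars, out, i = list(sentence), [], 0
    while i < len(chars):
        c = chars[i]
        if c.isalpha() and rng.random() < p:
            op = rng.choice(["omit", "switch", "typo", "replace"])
            if op == "omit":
                pass                                  # omission: drop the character
            elif op == "switch" and i + 1 < len(chars):
                out.extend([chars[i + 1], c])         # switch: swap adjacent characters
                i += 1
            elif op == "typo":
                out.extend([c, rng.choice(letters)])  # typo: insert a stray letter
            else:
                out.append(rng.choice(letters))       # random replacement
        else:
            out.append(c)
        i += 1
    return "".join(out)

print(add_noise("American people ab pahle jitni gas nahin kharidte.", p=0.15))
```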
|
|
|
|
|
### Dataset Sources
|
|
|
- **Repository:** https://github.com/Kartikaggarwal98/Robust_Codemix_MT |
|
- **Paper:** https://arxiv.org/abs/2403.16771 |
|
|
|
## Uses |
|
|
|
The dataset can be used on its own to train machine translation models for code-mixed Hindi translation in any direction.

It can also be combined with other languages from the same language family to transfer code-mixing capabilities in a zero-shot manner: zero-shot Bangla-English translation performed well without any dedicated Bangla code-mix corpus.

Including this data as a subset when training an Indic multilingual model can improve code-mixing performance by a significant margin.
|
|
|
### Source Data |
|
|
|
The [IITB Parallel corpus](https://www.cfilt.iitb.ac.in/iitb_parallel/) is chosen as the base dataset to translate into code-mixed forms.

The corpus contains widely diverse content: news articles, the judicial domain, Indian government websites, Wikipedia, book translations, etc.
|
|
|
#### Data Collection and Processing |
|
|
|
1. Given a source-target sentence pair S || T, generate synthetic code-mixed data by substituting words in the matrix-language sentence with the corresponding words from the embedded-language sentence. Here, Hindi is the matrix language, which provides the syntactic and morphological structure of the code-mixed sentence; English is the embedded language from which words are borrowed.
2. Create an inclusion list of nouns, adjectives, and quantifiers that are candidates for substitution.
3. POS-tag the corpus using any tagger. We used the [LTRC](http://ltrc.iiit.ac.in/analyzer/) analyzer for Hindi tagging.
4. Learn a word-alignment model between the parallel corpora (Hi-En) with fast-align. Once words are aligned, switch words from the English sentence into the Hindi sentence based on the inclusion list.
5. Use heuristics to replace n-gram words and create multiple code-mixed mappings of the same Hindi sentence.
6. Filter sentences using deterministic rules and perplexity scores from a multilingual model such as XLM.
7. Add synthetic noise (omission, switch, typo, random replacement) to account for the noisy nature of code-mixed text.
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61565c721b6f2789680793eb/KhhuM9Ze2-UrllHh6vRGL.png) |
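The core substitution steps can be sketched as below. This is a minimal illustration, not the paper's implementation: `make_codemix` is a hypothetical helper, the POS tags shown are illustrative, and the alignment is assumed to be already parsed from fast-align's `i-j` output.

```python
def make_codemix(hi_tokens, en_tokens, alignment, hi_pos,
                 allowed=frozenset({"NN", "JJ", "QC"})):
    """Substitute aligned English words into the Hindi (matrix-language)
    sentence for POS tags on the inclusion list."""
    hi_to_en = dict(alignment)  # Hindi index -> English index
    out = []
    for i, (tok, tag) in enumerate(zip(hi_tokens, hi_pos)):
        if tag in allowed and i in hi_to_en:
            out.append(en_tokens[hi_to_en[i]])  # borrow the embedded-language word
        else:
            out.append(tok)                     # keep the matrix-language word
    return " ".join(out)

hi = ["अमेरिकी", "लोग", "अब", "पहले", "जितनी", "गैस", "नहीं", "खरीदते", "।"]
en = ["American", "people", "no", "longer", "buy", "as", "much", "gas", "."]
align = [(0, 0), (1, 1), (5, 7)]  # word alignments from fast-align
pos = ["JJ", "NN", "RB", "RB", "QC", "NN", "NEG", "VM", "SYM"]
print(make_codemix(hi, en, align, pos))
# American people अब पहले जितनी gas नहीं खरीदते ।
```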
|
|
|
|
|
### Recommendations |
|
|
|
It's important to recognize that this work, conducted three years ago, utilized the state-of-the-art tools available at the time for each step of the pipeline. |
|
Consequently, the quality was inherently tied to the performance of these tools. Given the advancements in large language models (LLMs) today, there is potential to enhance the dataset. |
|
Implementing rigorous filtering processes, such as deduplication of similar sentences and removal of ungrammatical sentences, could significantly improve the training of high-quality models. |
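A first-pass deduplication could look like the sketch below; `deduplicate` is a hypothetical helper, and near-duplicate detection (e.g. MinHash) would catch more than this exact-match pass.

```python
def deduplicate(sentences):
    """Drop duplicates after lowercasing and collapsing whitespace,
    keeping the first occurrence of each normalized sentence."""
    seen, kept = set(), []
    for s in sentences:
        key = " ".join(s.lower().split())
        if key not in seen:
            seen.add(key)
            kept.append(s)
    return kept

print(deduplicate(["Gas nahin kharidte.", "gas  nahin kharidte.", "Naya vakya."]))
# ['Gas nahin kharidte.', 'Naya vakya.']
```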
|
|
|
## Citation Information |
|
|
|
``` |
|
@misc{kartik2024synthetic, |
|
title={Synthetic Data Generation and Joint Learning for Robust Code-Mixed Translation}, |
|
author={Kartik and Sanjana Soni and Anoop Kunchukuttan and Tanmoy Chakraborty and Md Shad Akhtar}, |
|
year={2024}, |
|
eprint={2403.16771}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL} |
|
} |
|
``` |
|
|
|
## Dataset Card Contact |
|
|
|
kartik@ucsc.edu |