---
license: apache-2.0
task_categories:
- translation
language:
- ur
tags:
- urdu
- translation
- transliteration
- parallel
- dataset
pretty_name: Roman Urdu Parl
size_categories:
- 1M<n<10M
---

# Roman Urdu Parallel Dataset - Split

This dataset is a version of the Roman-Urdu-Parl dataset that has been properly split into train, validation, and test sets. Details follow below.
This repository contains a split version of the Roman-Urdu Parallel Dataset (Roman-Urdu-Parl), structured specifically to enable fair evaluation of machine transliteration between Urdu and Roman-Urdu.
Roman-Urdu lacks a standard orthography, leading to a wide range of transliteration variants for the same Urdu sentence.
This dataset addresses the need for non-overlapping train, validation, and test sets, mitigating data leakage that could otherwise inflate evaluation metrics.

## Original Dataset Overview

The original Roman-Urdu-Parl dataset consists of 6,365,808 rows of parallel sentences in Urdu and Roman-Urdu.
These numbers can be replicated using the script at `scripts/check_stats.py`; a minimal sketch of the computation appears after the list.
Key statistics of the original dataset:

- Unique Urdu sentences: 1,087,220
- Unique Roman-Urdu sentences: 3,999,102
- Rows where both sentences match: 167
- Rows where both sentences differ: 6,365,641
- Urdu sentences appearing only once: 90,637
- Roman-Urdu sentences appearing only once: 3,165,765
- Unique pairs of Urdu and Roman-Urdu sentences: 4,003,784
- Short sentences (fewer than 3 words): 2,321
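
As a rough illustration, the counts above can be reproduced with a few pandas operations. This is a minimal sketch, not the exact contents of `scripts/check_stats.py`; the file path and the column names `urdu`/`roman_urdu` are assumptions:

```python
import pandas as pd

# Hypothetical file and column names; adjust to the actual files in this repository.
df = pd.read_csv("roman_urdu_parl.csv")

print("Rows:", len(df))
print("Unique Urdu sentences:", df["urdu"].nunique())
print("Unique Roman-Urdu sentences:", df["roman_urdu"].nunique())
print("Rows where both sentences match:", (df["urdu"] == df["roman_urdu"]).sum())
print("Rows where both sentences differ:", (df["urdu"] != df["roman_urdu"]).sum())

urdu_counts = df["urdu"].value_counts()
roman_counts = df["roman_urdu"].value_counts()
print("Urdu sentences appearing only once:", (urdu_counts == 1).sum())
print("Roman-Urdu sentences appearing only once:", (roman_counts == 1).sum())

print("Unique (Urdu, Roman-Urdu) pairs:",
      len(df.drop_duplicates(subset=["urdu", "roman_urdu"])))
print("Short sentences (fewer than 3 words):",
      (df["urdu"].str.split().str.len() < 3).sum())
```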

## Motivation for a Structured Splitting Approach
### Summary
In the original dataset, variations in the Roman-Urdu transliterations of the same Urdu sentence pose a risk of data leakage if the data is randomly split into training, validation, and test sets.
Random splits may place the same Urdu sentence (or its Roman-Urdu variations) in multiple sets, leading the model to "memorize" rather than generalize transliteration patterns.
This structured split addresses the issue by ensuring that the training, validation, and test sets do not overlap, thus promoting generalization.
The script that splits the dataset is at `splitting_rup_data.py`.

### Detailed Motivation
Roman-Urdu is an informal, non-standardized way of writing Urdu using the Roman alphabet, which allows for considerable variation in transliteration.
Different speakers may transliterate the same Urdu sentence in numerous ways, leading to a high degree of variability in Roman-Urdu spellings.
For instance, words like "کتاب" in Urdu can be written in Roman-Urdu as "kitaab," "kitab," or "kittab," depending on the writer's style and regional dialect influences.

The original Roman-Urdu Parallel (Roman-Urdu-Parl) dataset takes advantage of this variability to create a large corpus of 6.365 million parallel sentences by pairing 1.1 million core Urdu sentences with multiple Roman-Urdu transliterations.
This variability-rich approach is invaluable for developing robust transliteration models.
However, this very characteristic introduces significant challenges for data splitting and evaluation, especially if the goal is to build a model capable of generalizing rather than memorizing specific transliterations.

### Issues with Random Splitting
A typical random splitting approach might divide the dataset into training, validation, and test sets without accounting for sentence variability.
This could lead to the following issues:

1. Overlap of Sentence Variations Across Sets:
   Given the substantial transliteration variability, random splitting is likely to place different Roman-Urdu variations of the same Urdu sentence in multiple sets (e.g., training and test). As a result:
   a. Data Leakage: The model may encounter different transliterations of the same sentence across training and evaluation sets, effectively "seeing" a portion of the test or validation data during training. This exposure creates data leakage, enabling the model to memorize specific transliteration patterns rather than learning to generalize.
   b. Inflated Evaluation Scores: Due to data leakage, the model's evaluation metrics, such as BLEU scores, could be artificially inflated. These metrics would then fail to reflect the model's true performance on genuinely unseen data, compromising the reliability of model assessments.

2. Challenges in Model Generalization:
   If the same Urdu sentence (with different transliterations) appears in both training and evaluation sets, the model risks overfitting to common patterns rather than developing a nuanced understanding of transliteration rules.
   The model's performance on genuinely novel sentence structures and transliteration styles is therefore likely to be less reliable.

### Importance of a Structured Split
To address these issues, a structured data split is essential.
Such a split ensures that no Urdu sentence, or any of its variations, appears in more than one of the training, validation, and test sets, so that evaluation measures genuine generalization rather than memorization.

## Dataset Splitting Strategy

To prevent data leakage and ensure a balanced evaluation, the dataset is split according to the following strategy (see the sketch after this list):

1. Unique Sentence Selection for Validation and Test Sets:
   Validation set: 1,000 Urdu sentences that have exactly one Roman-Urdu variation across the full 6.3 million rows.
   Test set: Another 1,000 such single-variation Urdu sentences, excluding those in the validation set.

2. Replicated Sentence Selection (2-10 Variations):
   Validation set: 2,000 Urdu sentences that have between 2 and 10 Roman-Urdu variations. All variations of these 2,000 Urdu sentences are included.
   Test set: An additional 2,000 Urdu sentences with 2-10 variations, ensuring no overlap with the validation set.

3. Training Set Composition:
   All remaining sentences and their variations, excluding those selected for the validation and test sets, are included in the training set.

4. Smaller Subsets for Efficient Evaluation:
   Smaller validation and test sets are created to speed up evaluations during model development, each containing 3,000 sentences (unique and replicated).
   It is also ensured that only one Roman-Urdu variation is selected out of all the variations of an Urdu sentence.
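
A minimal pandas sketch of this strategy follows; `splitting_rup_data.py` is the authoritative implementation, and the file path, column names, and seed value below are assumptions:

```python
import pandas as pd

SEED = 42  # fixed seed for reproducibility (the actual seed is an assumption)

df = pd.read_csv("roman_urdu_parl.csv")
counts = df["urdu"].value_counts()

# 1. Urdu sentences with exactly one Roman-Urdu variation: 1,000 each for
#    validation and test.
singles = counts[counts == 1].index.to_series().sample(2000, random_state=SEED)
val_unique, test_unique = singles.iloc[:1000], singles.iloc[1000:]

# 2. Urdu sentences with 2-10 variations: 2,000 each, with all variations kept.
multi = counts[counts.between(2, 10)].index.to_series().sample(4000, random_state=SEED)
val_multi, test_multi = multi.iloc[:2000], multi.iloc[2000:]

val_urdu = set(val_unique) | set(val_multi)
test_urdu = set(test_unique) | set(test_multi)

val_df = df[df["urdu"].isin(val_urdu)]
test_df = df[df["urdu"].isin(test_urdu)]

# 3. Everything else goes to training: no Urdu sentence is shared across sets.
train_df = df[~df["urdu"].isin(val_urdu | test_urdu)]

# 4. Smaller subsets: one Roman-Urdu variation per Urdu sentence (3,000 rows each).
val_small = val_df.groupby("urdu", sort=False).head(1)
test_small = test_df.groupby("urdu", sort=False).head(1)

# Integrity check: the training set shares no Urdu sentence with val/test.
assert not (set(train_df["urdu"]) & (val_urdu | test_urdu))
```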

## Checking if Training Sentences Are Composed of Test Sentences or Their Repetitions

To ensure robust model performance and prevent data leakage, we checked whether any training sentence is composed entirely of a repeated test (or validation) sentence or its fragments.
In the original dataset, some sentences were simply repetitions of shorter sentences (e.g., "A B C D" and "A B C D A B C D"), which could lead to overfitting if these patterns appeared across the training and test sets.
A Python script was developed to identify and flag these cases, allowing us to remove or separate such repetitive sentences across the dataset splits. This helps the model learn genuine transliteration patterns and avoids artificially inflated evaluation scores, promoting better generalization to unseen data.
The script is at `scripts/check_substring.py`, and with our new split this count is zero. A sketch of the idea follows.
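
The sketch below handles whole-sentence repetitions only; `scripts/check_substring.py` is the authoritative script and may also cover fragments:

```python
def is_repetition_of(train_sentence: str, test_sentence: str) -> bool:
    """True if train_sentence is test_sentence repeated one or more times."""
    train_words = train_sentence.split()
    test_words = test_sentence.split()
    if not test_words or len(train_words) % len(test_words) != 0:
        return False
    reps = len(train_words) // len(test_words)
    return train_words == test_words * reps

def count_leaky_sentences(train_sentences, held_out_sentences):
    """Count training sentences that are repetitions of held-out sentences."""
    held_out = set(held_out_sentences)
    return sum(
        any(is_repetition_of(s, t) for t in held_out)
        for s in train_sentences
    )

# Example: "A B C D A B C D" is two repetitions of the test sentence "A B C D".
assert is_repetition_of("A B C D A B C D", "A B C D")
```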

## Key Features of the Split

- No Overlap Between Sets: Ensures that no Urdu sentence (or its variations) appears in more than one set, effectively preventing data leakage.
- Variation Inclusion: The comprehensive test and validation sets include all variations of selected Urdu sentences, providing a robust evaluation of the model's ability to handle transliteration diversity.
- Smaller Subsets for Rapid Testing: Allows for quick testing during model development while preserving dataset integrity.
- Random Sampling with Fixed Seed: Reproducibility is ensured by using a fixed random state.
- Balanced Evaluation: Incorporates both unique sentences and replicated sentences with multiple variations for a complete assessment.
- Data Integrity Checks: Verifies that no Urdu sentences are shared between the sets, ensuring an accurate measure of generalization (see the check sketched below).

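For instance, the no-overlap guarantee can be re-verified on the released splits along these lines (a sketch; the split file names and the `urdu` column name are assumptions):

```python
import pandas as pd

# Hypothetical file names; adjust to the actual split files in this repository.
splits = {name: pd.read_csv(f"{name}.csv") for name in ("train", "validation", "test")}

urdu_sets = {name: set(df["urdu"]) for name, df in splits.items()}
assert not (urdu_sets["train"] & urdu_sets["validation"])
assert not (urdu_sets["train"] & urdu_sets["test"])
assert not (urdu_sets["validation"] & urdu_sets["test"])
print("No Urdu sentence is shared between any two splits.")
```
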
## Citation

Original dataset paper:

    @article{alam2022roman,
      title={Roman-Urdu-Parl: Roman-Urdu and Urdu Parallel Corpus for Urdu Language Understanding},
      author={Alam, Mehreen and Hussain, Sibt Ul},
      journal={Transactions on Asian and Low-Resource Language Information Processing},
      volume={21},
      number={1},
      pages={1--20},
      year={2022},
      publisher={ACM New York, NY}
    }

## Dataset Card Authors

I wouldn't call myself the author of this dataset, because it's the work of the greats. I just had to work with this data, so I created the splits properly.
Still, if you are interested, my name is Umer (you can contact me on LinkedIn, username: UmerTariq1).