Muhammad Umer Tariq Butt committed on
Commit
bfd5fab
1 Parent(s): 5156e05

Add Roman-Urdu-Parl-split dataset files with LFS

.gitattributes CHANGED
@@ -1,58 +1,2 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
- # Video files - compressed
- *.mp4 filter=lfs diff=lfs merge=lfs -text
- *.webm filter=lfs diff=lfs merge=lfs -text
+ *.csv filter=lfs diff=lfs merge=lfs -text
+ *.txt filter=lfs diff=lfs merge=lfs -text
original_data/data.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21f0f75f8c423e2413348c8961c05fb3236fa5d43bac4d6f3e8d8a5496360f2d
+ size 1199473375
original_data/dataset_stats.md ADDED
@@ -0,0 +1,15 @@
+ Number of rows: 6365808
+
+ Number of unique Urdu sentences: 1087220
+ Number of unique Roman-Urdu sentences: 3999102
+
+ Number of rows where both column values are the same: 167
+ Number of rows where both column values are different: 6365641
+
+ Number of rows where the Urdu sentence appears only once in the dataset: 90637
+ Number of rows where the Roman-Urdu sentence appears only once in the dataset: 3165765
+
+ Number of rows where the combination occurs only once in the whole dataset: 3170561
+ Number of unique pairs of Urdu and Roman-Urdu sentences: 4003784
+
+ Number of sentences with fewer than 3 words: 2321
original_data/roman-urdu.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df8a69a6d35115fb932d605329cfd19ed72007eae1dc9ff9448b9fcd5393f428
+ size 477550031
original_data/splitting_strategy_rur_to_ur.md ADDED
@@ -0,0 +1,68 @@
+ **Original Issue**:
+
+ The dataset comprises 6.365 million parallel sentences in Urdu and Roman-Urdu. Many Roman-Urdu sentences are just variations of the same Urdu sentence due to different transliteration styles. If we randomly split this dataset into training, validation, and test sets, there's a high chance that variations of the same Urdu sentence will appear in multiple sets. This overlap can lead to data leakage, causing the model to memorize specific sentence pairs rather than learning to generalize transliteration patterns. Consequently, evaluation metrics like BLEU scores may be artificially inflated, not accurately reflecting the model's true performance on unseen data.
+
+
+ **Splitting Strategy**:
+ To address this issue, the dataset is split into training, validation, and test sets in a way that ensures no Urdu sentence (and its variations) appears in more than one set. The strategy involves grouping sentences by unique Urdu text and carefully selecting sentences based on the number of their variations.
+
+ 1. **Load and Preprocess the Data**
+
+ Load the Dataset: Read the CSV file containing Urdu and Roman-Urdu sentence pairs into a Pandas DataFrame.
+ Remove Missing Entries: Drop any rows where the 'Urdu text' is missing.
+ Group by Urdu Sentences: Group the data by 'Urdu text' and aggregate all corresponding 'Roman-Urdu text' variations into lists.
+ Count Variations: Add a 'count' column representing the number of Roman-Urdu variations for each Urdu sentence.
+
+ 2. **Select Unique Sentences for Validation and Test Sets**
+
+ Validation Set:
+ Select 1,000 Urdu sentences that occur only once in the dataset (i.e., sentences with a 'count' of 1).
+ Include their corresponding Roman-Urdu text.
+ Test Set:
+ From the remaining Urdu sentences with a 'count' of 1 (excluding those in the validation set), select another 1,000 sentences.
+ Include their corresponding Roman-Urdu text.
+
+ 3. **Select Replicated Sentences with Variations for Validation and Test Sets**
+
+ Validation Set:
+ Select 2,000 Urdu sentences that have between 2 and 10 Roman-Urdu variations (i.e., 'count' > 1 and 'count' ≤ 10).
+ Include all variations of these Urdu sentences in the validation set.
+ Test Set:
+ From the remaining Urdu sentences with 2 to 10 variations (excluding those in the validation set), select another 2,000 sentences.
+ Include all variations of these Urdu sentences in the test set.
+
+ 4. **Prepare the Training Set**
+
+ Exclude Test and Validation Sentences:
+ Remove all Urdu sentences (and their variations) present in the test and validation sets from the original dataset.
+ Form the Training Set:
+ The training set consists of all remaining Urdu sentences and their corresponding Roman-Urdu variations not included in the test or validation sets.
+
+ 5. **Create Smaller Subsets for Quick Evaluation**
+
+ Purpose: Facilitate faster testing and validation during model development.
+ Validation Subset:
+ From the unique Urdu sentences in the validation set, randomly select 1,000 sentences (they have only one variation each).
+ From the replicated Urdu sentences in the validation set, randomly select only one Roman-Urdu variation per Urdu sentence.
+ Combine these to form a smaller validation set of 3,000 sentences.
+ Test Subset:
+ Repeat the same process for the test set to create a smaller test set of 3,000 sentences.
+
+
+ **Key Points**:
+ - No Overlap Between Sets: By excluding any Urdu sentences used in the test and validation sets from the training set, the strategy ensures no overlap, preventing data leakage.
+
+ - Inclusion of All Variations: The large test and validation sets include all variations of the selected Urdu sentences to thoroughly evaluate the model's ability to handle different transliterations.
+
+ - Smaller Subsets for Efficiency: The smaller test and validation sets contain only one variation per Urdu sentence, allowing quicker evaluations during model development without compromising the integrity of the results.
+
+ - Random Sampling with Fixed Seed: A fixed random_state (e.g., 42) is used in all random sampling steps to ensure reproducibility of the data splits.
+
+ - Balanced Evaluation: The strategy includes both unique sentences and sentences with multiple variations, providing a comprehensive evaluation of the model's performance across different levels of sentence frequency and complexity.
+
+ - Data Integrity Checks: After splitting, the sizes of the datasets are verified, and checks are performed to confirm that no Urdu sentences are shared between the training, validation, and test sets.
+
+ - Generalization Focus: By ensuring the model does not see any test or validation sentences during training, the evaluation metrics will accurately reflect the model's ability to generalize to unseen data.
+
+ - We also checked whether any training sentences are made up entirely of test sentences (or their repetitions) and found no matches. (file: Transliterate/RUP/finetuning/scripts/one_time_usage/filter_uniqueurdu_data.py)
+
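To make steps 1-3 of the strategy concrete, here is a minimal pandas sketch of the grouping and selection logic. It is an illustration only, using the 1,000/2,000 sample sizes stated in the steps above; the committed implementation is scripts/splitting_rup_data.py later in this commit, which uses its own constants.

```python
import pandas as pd

# Step 1: load, drop rows with missing Urdu text, collect every Roman-Urdu variation per Urdu sentence.
df = pd.read_csv("original_data/data.csv", encoding="utf-8").dropna(subset=["Urdu text"])
grouped = df.groupby("Urdu text")["Roman-Urdu text"].apply(list).reset_index()
grouped["count"] = grouped["Roman-Urdu text"].apply(len)

# Step 2: Urdu sentences that occur exactly once (sample size follows the text above).
unique_val = grouped[grouped["count"] == 1].sample(n=1000, random_state=42)

# Step 3: Urdu sentences with 2-10 Roman-Urdu variations.
replicated_val = grouped[(grouped["count"] > 1) & (grouped["count"] <= 10)].sample(n=2000, random_state=42)

# Expand the variation lists back into one row per (Urdu, Roman-Urdu) pair.
validation_set = pd.concat([unique_val, replicated_val]).explode("Roman-Urdu text")
```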
original_data/urdu.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:820b16cc225e19e63e0aa6a3868bbca6e25211ef5b0ca307c73b7643b18cc202
+ size 714048987
scripts/check_stats.py ADDED
@@ -0,0 +1,56 @@
+ import pandas as pd
+
+ # File path to the input CSV file
+ input_file_path = '../original_data/data.csv'
+ # Output file path
+ output_file_path = 'top_50_urdu.json'
+
+ # Load the CSV file into a Pandas DataFrame
+ df = pd.read_csv(input_file_path, encoding='utf-8')
+
+
+ # Count the number of rows
+ num_rows = df.shape[0]
+ print(f"Number of rows: {num_rows}")
+
+ # Count the number of rows where both column values are the same
+ same_values_count = df[df['Urdu text'] == df['Roman-Urdu text']].shape[0]
+ print(f"Number of rows where both column values are the same: {same_values_count}")
+
+ # Count the number of rows where both column values are different
+ different_values_count = df[df['Urdu text'] != df['Roman-Urdu text']].shape[0]
+ print(f"Number of rows where both column values are different: {different_values_count}")
+
+ # Count the number of unique Urdu sentences
+ unique_urdu_sentences = df['Urdu text'].nunique()
+ print(f"Number of unique Urdu sentences: {unique_urdu_sentences}")
+
+ # Count the number of unique Roman-Urdu sentences
+ unique_roman_urdu_sentences = df['Roman-Urdu text'].nunique()
+ print(f"Number of unique Roman-Urdu sentences: {unique_roman_urdu_sentences}")
+
+ # Count the number of rows whose (Urdu, Roman-Urdu) combination does not appear anywhere else in the dataset
+ unique_pairs = df.groupby(['Urdu text', 'Roman-Urdu text']).size().reset_index(name='count')
+ unique_pairs = unique_pairs[unique_pairs['count'] == 1].shape[0]
+ print(f"Number of rows where the combination occurs only once in the whole dataset: {unique_pairs}")
+
+ # Count the number of unique pairs of Urdu and Roman-Urdu sentences
+ unique_pairs = df.drop_duplicates().shape[0]
+ print(f"Number of unique pairs of Urdu and Roman-Urdu sentences: {unique_pairs}")
+
+
+ # Count the number of rows where the Urdu sentence appears only once in the dataset
+ urdu_sentence_counts = df['Urdu text'].value_counts()
+ urdu_sentence_counts = urdu_sentence_counts[urdu_sentence_counts == 1].shape[0]
+ print(f"Number of rows where the Urdu sentence appears only once in the dataset: {urdu_sentence_counts}")
+
+ # Count the number of rows where the Roman-Urdu sentence appears only once in the dataset
+ roman_urdu_sentence_counts = df['Roman-Urdu text'].value_counts()
+ roman_urdu_sentence_counts = roman_urdu_sentence_counts[roman_urdu_sentence_counts == 1].shape[0]
+ print(f"Number of rows where the Roman-Urdu sentence appears only once in the dataset: {roman_urdu_sentence_counts}")
+
+
+ # Count the number of Urdu sentences that appear more than once but less than 11 times in the whole dataset
+ urdu_sentence_counts = df['Urdu text'].value_counts()
+ urdu_sentence_counts = urdu_sentence_counts[(urdu_sentence_counts > 1) & (urdu_sentence_counts <= 10)].shape[0]
+ print(f"Number of Urdu sentences that appear more than once but less than 11 times in the whole dataset: {urdu_sentence_counts}")
scripts/check_substring.py ADDED
@@ -0,0 +1,93 @@
+
+
+ # Check whether any training sentences are made up entirely of test sentences (or their repetitions)
+
+ # Observation: some sentences in the original dataset are composed of other sentences in the dataset.
+ # For example, one sentence may be "A B C D" and another sentence "A B C D A B C D".
+ # This is problematic because the model can easily overfit on the training data, and
+ # if the sentence that is being repeated is in the test set, the model will perform very well on the test set but not on real-world data.
+
+ import pandas as pd
+ import ahocorasick
+ from tqdm import tqdm
+ import json
+
+ # File paths
+ training_set_path = '../train_set.csv'
+ test_set_path = '../small_validation_set.csv'
+
+ print("Loading the CSV files...")
+ # Load the CSV files into Pandas DataFrames
+ test_set = pd.read_csv(test_set_path, encoding='utf-8')
+ training_set = pd.read_csv(training_set_path, encoding='utf-8')
+
+ print("CSV files loaded successfully.")
+
+ # Prepare the test sentences
+ test_sentences = test_set['Urdu text'].dropna()
+ # Remove extra spaces and standardize
+ test_sentences = test_sentences.apply(lambda x: ' '.join(x.strip().split()))
+ # Skip sentences with only one word if desired
+ test_sentences = test_sentences[test_sentences.str.split().str.len() > 1].unique()
+ test_sentences_set = set(test_sentences)
+
+ print(f"Number of test sentences: {len(test_sentences_set)}")
+
+ # Build the Aho-Corasick automaton
+ print("Building the Aho-Corasick automaton with test sentences...")
+ A = ahocorasick.Automaton()
+
+ for idx, test_sentence in enumerate(test_sentences):
+     A.add_word(test_sentence, (idx, test_sentence))
+
+ A.make_automaton()
+ print("Automaton built successfully.")
+
+ # Initialize matches dictionary
+ matches = {}
+
+ print("Processing training sentences...")
+ training_sentences = training_set['Urdu text'].dropna()
+ # Remove extra spaces and standardize
+ training_sentences = training_sentences.apply(lambda x: ' '.join(x.strip().split()))
+ training_sentences = training_sentences.unique()
+
+ for training_sentence in tqdm(training_sentences):
+     s = training_sentence
+     s_length = len(s)
+     matches_in_s = []
+     for end_index, (insert_order, test_sentence) in A.iter(s):
+         start_index = end_index - len(test_sentence) + 1
+         matches_in_s.append((start_index, end_index, test_sentence))
+     if not matches_in_s:
+         continue
+     # Sort matches by start_index
+     matches_in_s.sort(key=lambda x: x[0])
+     # Now check if matches cover the entire training sentence without gaps
+     # and all matches are of the same test sentence
+     covers_entire_sentence = True
+     current_index = 0
+     first_test_sentence = matches_in_s[0][2]
+     all_same_test_sentence = True
+     for start_index, end_index, test_sentence in matches_in_s:
+         if start_index != current_index:
+             covers_entire_sentence = False
+             break
+         if test_sentence != first_test_sentence:
+             all_same_test_sentence = False
+             break
+         current_index = end_index + 1
+     if covers_entire_sentence and current_index == s_length and all_same_test_sentence:
+         # Training sentence is made up entirely of repetitions of test_sentence
+         if test_sentence not in matches:
+             matches[test_sentence] = []
+         matches[test_sentence].append(s)
+ print("Processing completed.")
+ print("Number of matches: ", sum(len(v) for v in matches.values()))
+
+ # Optionally, save matches to a JSON file
+ output_file_path = '/netscratch/butt/Transliterate/RUP/finetuning/scripts/one_time_usage/test_training_matches.json'
+ with open(output_file_path, 'w', encoding='utf-8') as json_file:
+     json.dump(matches, json_file, ensure_ascii=False, indent=4)
+
+ print(f"Matches have been written to {output_file_path}")
scripts/splitting_rup_data.py ADDED
@@ -0,0 +1,109 @@
+ # bash /home/butt/run_docker_cpu.sh python splitting_rup_data.py
+
+ import pandas as pd
+ import numpy as np
+
+ # File path to the input CSV file
+ input_file_path = '../original_data/data.csv'
+ # Output file paths for training, test, and validation sets
+ base_path = "./"
+
+ train_output_path = base_path + 'train_set.csv'
+ test_output_path = base_path + 'test_set.csv'
+ validation_output_path = base_path + 'validation_set.csv'
+ small_test_output_path = base_path + 'small_test_set.csv'
+ small_validation_output_path = base_path + 'small_validation_set.csv'
+
+ NUMBER_OF_UNIQUE_SENTENCES = 1500
+ NUMBER_OF_REPLICATED_SENTENCES = 3000
+ REPLICATION_RATE = 10
+
+ # Load the CSV file into a Pandas DataFrame
+ df = pd.read_csv(input_file_path, encoding='utf-8')
+
+ # Drop rows where 'Urdu text' is NaN
+ df = df.dropna(subset=['Urdu text'])
+
+ # Group by 'Urdu text' and aggregate the corresponding 'Roman-Urdu text'
+ grouped = df.groupby('Urdu text')['Roman-Urdu text'].apply(list).reset_index()
+
+ # Add a 'count' column to store the number of occurrences
+ grouped['count'] = grouped['Roman-Urdu text'].apply(len)
+
+ # Select NUMBER_OF_UNIQUE_SENTENCES least occurring groups (unique sentences without replication in the dataset) for validation
+ unique_sentences_val = grouped[grouped['count'] == 1].sample(n=NUMBER_OF_UNIQUE_SENTENCES, random_state=42)
+ unique_sentences_val = unique_sentences_val.explode('Roman-Urdu text')  # Convert list to individual rows
+
+ # Select NUMBER_OF_UNIQUE_SENTENCES least occurring groups for test, excluding those already in the validation set (unique_sentences_val)
+ unique_sentences_test = grouped[grouped['count'] == 1]
+ unique_sentences_test = unique_sentences_test[~unique_sentences_test['Urdu text'].isin(unique_sentences_val['Urdu text'])]
+ unique_sentences_test = unique_sentences_test.sample(n=NUMBER_OF_UNIQUE_SENTENCES, random_state=42)
+ unique_sentences_test = unique_sentences_test.explode('Roman-Urdu text')  # Convert list to individual rows
+
+ # The 'replicated' variables are for the full test/val sets and 'one_replicated' are for the small test/val sets
+
+ # Select NUMBER_OF_REPLICATED_SENTENCES groups from sentences that appear less than or equal to REPLICATION_RATE times
+ replicated_sentences_val = grouped[(grouped['count'] <= REPLICATION_RATE) & (grouped['count'] > 1)].sample(n=NUMBER_OF_REPLICATED_SENTENCES, random_state=42)
+
+ # Do the same for the test set, excluding sentences already in the validation set
+ replicated_sentences_test = grouped[(grouped['count'] <= REPLICATION_RATE) & (grouped['count'] > 1)]
+ replicated_sentences_test = replicated_sentences_test[~replicated_sentences_test['Urdu text'].isin(replicated_sentences_val['Urdu text'])]
+ replicated_sentences_test = replicated_sentences_test.sample(n=NUMBER_OF_REPLICATED_SENTENCES, random_state=42)
+
+ # Select any 1 sentence from each group of the replicated sentences
+ one_replicated_sentences_val = replicated_sentences_val.groupby('Urdu text').apply(lambda x: x.sample(1, random_state=42)).reset_index(drop=True)
+ # Do the same for the test set
+ one_replicated_sentences_test = replicated_sentences_test.groupby('Urdu text').apply(lambda x: x.sample(1, random_state=42)).reset_index(drop=True)
+
+ # Explode both the replicated and one_replicated frames
+ replicated_sentences_val = replicated_sentences_val.explode('Roman-Urdu text')
+ one_replicated_sentences_val = one_replicated_sentences_val.explode('Roman-Urdu text')
+
+ replicated_sentences_test = replicated_sentences_test.explode('Roman-Urdu text')
+ one_replicated_sentences_test = one_replicated_sentences_test.explode('Roman-Urdu text')
+
+ # Prepare the test and validation sets
+ test_set = pd.concat([unique_sentences_test, replicated_sentences_test]).reset_index(drop=True)
+ validation_set = pd.concat([unique_sentences_val, replicated_sentences_val]).reset_index(drop=True)
+
+ # Create smaller test and validation sets
+ # Subset NUMBER_OF_UNIQUE_SENTENCES from unique test
+ small_unique_sentences_test = unique_sentences_test.sample(n=NUMBER_OF_UNIQUE_SENTENCES, random_state=42)
+ # Subset NUMBER_OF_UNIQUE_SENTENCES from unique validation
+ small_unique_sentences_val = unique_sentences_val.sample(n=NUMBER_OF_UNIQUE_SENTENCES, random_state=42)
+
+ # Subset NUMBER_OF_REPLICATED_SENTENCES from replicated test
+ small_replicated_sentences_test = replicated_sentences_test.sample(n=NUMBER_OF_REPLICATED_SENTENCES, random_state=42)
+ # Subset NUMBER_OF_REPLICATED_SENTENCES from replicated validation
+ small_replicated_sentences_val = replicated_sentences_val.sample(n=NUMBER_OF_REPLICATED_SENTENCES, random_state=42)
+
+
+ # Explode all the small sets
+ small_unique_sentences_test = small_unique_sentences_test.explode('Roman-Urdu text')
+ small_unique_sentences_val = small_unique_sentences_val.explode('Roman-Urdu text')
+ small_replicated_sentences_test = small_replicated_sentences_test.explode('Roman-Urdu text')
+ small_replicated_sentences_val = small_replicated_sentences_val.explode('Roman-Urdu text')
+
+ # Combine the small sets
+ small_test_set = pd.concat([small_unique_sentences_test, small_replicated_sentences_test]).reset_index(drop=True)
+ small_validation_set = pd.concat([small_unique_sentences_val, small_replicated_sentences_val]).reset_index(drop=True)
+
+ # Prepare the training set by excluding the test and validation sets from the original DataFrame
+ # The training set should be the whole data except for test_set and validation_set
+ training_set = df[~df['Urdu text'].isin(test_set['Urdu text']) & ~df['Urdu text'].isin(validation_set['Urdu text'])]
+
+
+ # Save only 'Urdu text' and 'Roman-Urdu text' columns to CSV files
+ training_set[['Urdu text', 'Roman-Urdu text']].to_csv(train_output_path, index=False, encoding='utf-8')
+ test_set[['Urdu text', 'Roman-Urdu text']].to_csv(test_output_path, index=False, encoding='utf-8')
+ validation_set[['Urdu text', 'Roman-Urdu text']].to_csv(validation_output_path, index=False, encoding='utf-8')
+ small_test_set[['Urdu text', 'Roman-Urdu text']].to_csv(small_test_output_path, index=False, encoding='utf-8')
+ small_validation_set[['Urdu text', 'Roman-Urdu text']].to_csv(small_validation_output_path, index=False, encoding='utf-8')
+
+ print("Training, test, validation, and smaller subsets have been saved to their respective CSV files.")
+ # Print the number of rows in each file
+ print(f"Number of rows in training set: {len(training_set)}")
+ print(f"Number of rows in test set: {len(test_set)}")
+ print(f"Number of rows in validation set: {len(validation_set)}")
+ print(f"Number of rows in small test set: {len(small_test_set)}")
+ print(f"Number of rows in small validation set: {len(small_validation_set)}")
small_test_set.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6dd736f3b25909305f156ee354998803fc6b555da86846496e195be6296a39f2
+ size 446913
small_validation_set.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e0dc61cac2f0f30b5556a99563aa19b2398558f65e371baca701fe8a4c44ec8e
+ size 441397
splitting_strategy_rur_to_ur.md ADDED
@@ -0,0 +1,68 @@
+ **Original Issue**:
+
+ The dataset comprises 6.365 million parallel sentences in Urdu and Roman-Urdu. Many Roman-Urdu sentences are just variations of the same Urdu sentence due to different transliteration styles. If we randomly split this dataset into training, validation, and test sets, there's a high chance that variations of the same Urdu sentence will appear in multiple sets. This overlap can lead to data leakage, causing the model to memorize specific sentence pairs rather than learning to generalize transliteration patterns. Consequently, evaluation metrics like BLEU scores may be artificially inflated, not accurately reflecting the model's true performance on unseen data.
+
+
+ **Splitting Strategy**:
+ To address this issue, the dataset is split into training, validation, and test sets in a way that ensures no Urdu sentence (and its variations) appears in more than one set. The strategy involves grouping sentences by unique Urdu text and carefully selecting sentences based on the number of their variations.
+
+ 1. **Load and Preprocess the Data**
+
+ Load the Dataset: Read the CSV file containing Urdu and Roman-Urdu sentence pairs into a Pandas DataFrame.
+ Remove Missing Entries: Drop any rows where the 'Urdu text' is missing.
+ Group by Urdu Sentences: Group the data by 'Urdu text' and aggregate all corresponding 'Roman-Urdu text' variations into lists.
+ Count Variations: Add a 'count' column representing the number of Roman-Urdu variations for each Urdu sentence.
+
+ 2. **Select Unique Sentences for Validation and Test Sets**
+
+ Validation Set:
+ Select 1,000 Urdu sentences that occur only once in the dataset (i.e., sentences with a 'count' of 1).
+ Include their corresponding Roman-Urdu text.
+ Test Set:
+ From the remaining Urdu sentences with a 'count' of 1 (excluding those in the validation set), select another 1,000 sentences.
+ Include their corresponding Roman-Urdu text.
+
+ 3. **Select Replicated Sentences with Variations for Validation and Test Sets**
+
+ Validation Set:
+ Select 2,000 Urdu sentences that have between 2 and 10 Roman-Urdu variations (i.e., 'count' > 1 and 'count' ≤ 10).
+ Include all variations of these Urdu sentences in the validation set.
+ Test Set:
+ From the remaining Urdu sentences with 2 to 10 variations (excluding those in the validation set), select another 2,000 sentences.
+ Include all variations of these Urdu sentences in the test set.
+
+ 4. **Prepare the Training Set**
+
+ Exclude Test and Validation Sentences:
+ Remove all Urdu sentences (and their variations) present in the test and validation sets from the original dataset.
+ Form the Training Set:
+ The training set consists of all remaining Urdu sentences and their corresponding Roman-Urdu variations not included in the test or validation sets.
+
+ 5. **Create Smaller Subsets for Quick Evaluation**
+
+ Purpose: Facilitate faster testing and validation during model development.
+ Validation Subset:
+ From the unique Urdu sentences in the validation set, randomly select 1,000 sentences (they have only one variation each).
+ From the replicated Urdu sentences in the validation set, randomly select only one Roman-Urdu variation per Urdu sentence.
+ Combine these to form a smaller validation set of 3,000 sentences.
+ Test Subset:
+ Repeat the same process for the test set to create a smaller test set of 3,000 sentences.
+
+
+ **Key Points**:
+ - No Overlap Between Sets: By excluding any Urdu sentences used in the test and validation sets from the training set, the strategy ensures no overlap, preventing data leakage.
+
+ - Inclusion of All Variations: The large test and validation sets include all variations of the selected Urdu sentences to thoroughly evaluate the model's ability to handle different transliterations.
+
+ - Smaller Subsets for Efficiency: The smaller test and validation sets contain only one variation per Urdu sentence, allowing quicker evaluations during model development without compromising the integrity of the results.
+
+ - Random Sampling with Fixed Seed: A fixed random_state (e.g., 42) is used in all random sampling steps to ensure reproducibility of the data splits.
+
+ - Balanced Evaluation: The strategy includes both unique sentences and sentences with multiple variations, providing a comprehensive evaluation of the model's performance across different levels of sentence frequency and complexity.
+
+ - Data Integrity Checks: After splitting, the sizes of the datasets are verified, and checks are performed to confirm that no Urdu sentences are shared between the training, validation, and test sets.
+
+ - Generalization Focus: By ensuring the model does not see any test or validation sentences during training, the evaluation metrics will accurately reflect the model's ability to generalize to unseen data.
+
+ - We also checked whether any training sentences are made up entirely of test sentences (or their repetitions) and found no matches. (file: Transliterate/RUP/finetuning/scripts/one_time_usage/filter_uniqueurdu_data.py)
+
test_set.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8c39f7b6e364abb6f7a0ef802a5e99794083ba8ad69e713888df471b523155b
+ size 1846199
train_set.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e35eef8de7871c70fe54b2f605bfdd64df26244fe40f4d0dc638f8cbd115f00b
+ size 1189435308
validation_set.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa4ab3c419da5b51a77caa2d1ed4779618f62440cf804fb70e0c4bc8b8957ef9
+ size 1825822