KazParC
The Kazakh Parallel Corpus (KazParC) is a parallel corpus designed for machine translation across Kazakh, English, Russian, and Turkish. The first and largest publicly available corpus of its kind, KazParC contains 372,164 parallel sentences covering a range of domains, developed with the assistance of human translators.
Data Sources and Domains
The data sources include:
- proverbs and sayings
- terminology glossaries
- phrasebooks
- literary works
- periodicals
- language learning materials, including the SCoRE corpus by Chujo et al. (2015)
- educational video subtitle collections, such as QED by Abdelali et al. (2014)
- news items, such as KazNERD (Yeshpanov et al., 2022) and WMT (Tiedemann, 2012)
- TED talks
- governmental and regulatory legal documents from Kazakhstan
- communications from the official website of the President of the Republic of Kazakhstan
- United Nations publications
- image captions from sources like COCO
The sources are categorised into five broad domains:
Domain | # lines | % | # tokens (EN) | % | # tokens (KK) | % | # tokens (RU) | % | # tokens (TR) | %
---|---|---|---|---|---|---|---|---|---|---
Mass media | 120,547 | 32.4 | 1,817,276 | 28.3 | 1,340,346 | 28.6 | 1,454,430 | 29.0 | 1,311,985 | 28.5
General | 94,988 | 25.5 | 844,541 | 13.1 | 578,236 | 12.3 | 618,960 | 12.3 | 608,020 | 13.2
Legal documents | 77,183 | 20.8 | 2,650,626 | 41.3 | 1,925,561 | 41.0 | 1,991,222 | 39.7 | 1,880,081 | 40.8
Education and science | 46,252 | 12.4 | 522,830 | 8.1 | 392,348 | 8.4 | 444,786 | 8.9 | 376,484 | 8.2
Fiction | 32,932 | 8.9 | 589,001 | 9.2 | 456,385 | 9.7 | 510,168 | 10.2 | 433,968 | 9.4
Total | 371,902 | 100 | 6,424,274 | 100 | 4,692,876 | 100 | 5,019,566 | 100 | 4,610,538 | 100
In the table below, the two values given for # sents, # tokens, and # types correspond to the first and second language of each pair, respectively.

Pair | # lines | # sents | # tokens | # types
---|---|---|---|---
KK↔EN | 363,594 | 362,230 / 361,087 | 4,670,789 / 6,393,381 | 184,258 / 59,062
KK↔RU | 363,482 | 362,230 / 362,748 | 4,670,593 / 4,996,031 | 184,258 / 183,204
KK↔TR | 362,150 | 362,230 / 361,660 | 4,668,852 / 4,586,421 | 184,258 / 175,145
EN↔RU | 363,456 | 361,087 / 362,748 | 6,392,301 / 4,994,310 | 59,062 / 183,204
EN↔TR | 362,392 | 361,087 / 361,660 | 6,380,703 / 4,579,375 | 59,062 / 175,145
RU↔TR | 363,324 | 362,748 / 361,660 | 4,999,850 / 4,591,847 | 183,204 / 175,145
Synthetic Corpus
To make our parallel corpus more extensive, we crawled English-language websites and gathered a total of 1,797,066 sentences. These sentences were then automatically translated into Kazakh, Russian, and Turkish using the Google Translate service. We refer to this collection as 'SynC' (Synthetic Corpus).
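The crawl-and-translate construction of SynC can be sketched as follows. This is an illustrative sketch only: `translate` is a stub standing in for an MT backend (the authors used Google Translate), and `build_sync` is a hypothetical helper, not part of the released code.

```python
def translate(sentence: str, target_lang: str) -> str:
    """Placeholder MT call; a real pipeline would query a translation API
    (the authors used Google Translate). Stubbed out so the sketch runs."""
    return f"[{target_lang}] {sentence}"


def build_sync(english_sentences, target_langs=("kk", "ru", "tr")):
    """Turn crawled English sentences into multi-way synthetic parallel rows."""
    rows = []
    for i, sent in enumerate(english_sentences):
        row = {"id": i, "en": sent}
        for lang in target_langs:
            # Each crawled English sentence is machine-translated into the
            # three other corpus languages.
            row[lang] = translate(sent, lang)
        rows.append(row)
    return rows


rows = build_sync(["Hello, world."])
```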
Pair | # lines | # sents | # tokens | # types
---|---|---|---|---
KK↔EN | 1,787,050 | 1,782,192 / 1,781,019 | 26,630,960 / 35,291,705 | 685,135 / 300,556
KK↔RU | 1,787,448 | 1,782,192 / 1,777,500 | 26,654,195 / 30,241,895 | 685,135 / 672,146
KK↔TR | 1,791,425 | 1,782,192 / 1,782,257 | 26,726,439 / 27,865,860 | 685,135 / 656,294
EN↔RU | 1,784,513 | 1,781,019 / 1,777,500 | 35,244,800 / 30,175,611 | 300,556 / 672,146
EN↔TR | 1,788,564 | 1,781,019 / 1,782,257 | 35,344,188 / 27,806,708 | 300,556 / 656,294
RU↔TR | 1,788,027 | 1,777,500 / 1,782,257 | 30,269,083 / 27,816,210 | 672,146 / 656,294
Data Splits
KazParC
We first created a test set by randomly selecting 250 unique rows from each of the sources outlined in Data Sources and Domains. The remaining data were divided into language pairs with an 80/20 train/validation split, while maintaining the distribution of domains within both sets.
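A domain-stratified 80/20 split like the one described can be sketched in plain Python. This is illustrative only: the authors' exact procedure and random seed are not specified, and `stratified_split` is a hypothetical helper.

```python
import random
from collections import defaultdict


def stratified_split(rows, ratio=0.8, seed=0):
    """Split rows into train/valid while preserving the per-domain distribution.

    `rows` is a list of dicts, each carrying a "domain" key. Sketch only;
    the authors' exact procedure and seed are not specified.
    """
    rng = random.Random(seed)
    by_domain = defaultdict(list)
    for row in rows:
        by_domain[row["domain"]].append(row)

    train, valid = [], []
    for domain_rows in by_domain.values():
        # Shuffle and cut each domain separately so both splits keep
        # the same domain proportions as the full corpus.
        rng.shuffle(domain_rows)
        cut = int(len(domain_rows) * ratio)
        train.extend(domain_rows[:cut])
        valid.extend(domain_rows[cut:])
    return train, valid


rows = [{"domain": "legal"}] * 10 + [{"domain": "fiction"}] * 10
train, valid = stratified_split(rows)
```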
Pair | Train # lines | Train # sents | Train # tokens | Train # types | Valid # lines | Valid # sents | Valid # tokens | Valid # types | Test # lines | Test # sents | Test # tokens | Test # types
---|---|---|---|---|---|---|---|---|---|---|---|---
KK↔EN | 290,877 | 286,958 / 286,197 | 3,693,263 / 5,057,687 | 164,766 / 54,311 | 72,719 | 72,426 / 72,403 | 920,482 / 1,259,827 | 83,057 / 32,063 | 4,750 | 4,750 / 4,750 | 57,044 / 75,867 | 17,475 / 9,729
KK↔RU | 290,785 | 286,943 / 287,215 | 3,689,799 / 3,945,741 | 164,995 / 165,882 | 72,697 | 72,413 / 72,439 | 923,750 / 988,374 | 82,958 / 87,519 | 4,750 | 4,750 / 4,750 | 57,044 / 61,916 | 17,475 / 18,804
KK↔TR | 289,720 | 286,694 / 286,279 | 3,691,751 / 3,626,361 | 164,961 / 157,460 | 72,430 | 72,211 / 72,190 | 920,057 / 904,199 | 82,698 / 80,885 | 4,750 | 4,750 / 4,750 | 57,044 / 55,861 | 17,475 / 17,284
EN↔RU | 290,764 | 286,185 / 287,261 | 5,058,530 / 3,950,362 | 54,322 / 165,701 | 72,692 | 72,377 / 72,427 | 1,257,904 / 982,032 | 32,208 / 87,541 | 4,750 | 4,750 / 4,750 | 75,867 / 61,916 | 9,729 / 18,804
EN↔TR | 289,913 | 285,967 / 286,288 | 5,048,274 / 3,621,531 | 54,224 / 157,369 | 72,479 | 72,220 / 72,219 | 1,256,562 / 901,983 | 32,269 / 80,838 | 4,750 | 4,750 / 4,750 | 75,867 / 55,861 | 9,729 / 17,284
RU↔TR | 290,899 | 287,241 / 286,475 | 3,947,809 / 3,626,436 | 165,482 / 157,470 | 72,725 | 72,455 / 72,362 | 990,125 / 909,550 | 87,831 / 80,962 | 4,750 | 4,750 / 4,750 | 61,916 / 55,861 | 18,804 / 17,284
SynC
We divided the synthetic corpus into training and validation sets with a 90/10 ratio.
Pair | Train # lines | Train # sents | Train # tokens | Train # types | Valid # lines | Valid # sents | Valid # tokens | Valid # types
---|---|---|---|---|---|---|---|---
KK↔EN | 1,608,345 | 1,604,414 / 1,603,426 | 23,970,260 / 31,767,617 | 650,144 / 286,372 | 178,705 | 178,654 / 178,639 | 2,660,700 / 3,524,088 | 208,838 / 105,517
KK↔RU | 1,608,703 | 1,604,468 / 1,600,643 | 23,992,148 / 27,221,583 | 650,170 / 642,604 | 178,745 | 178,691 / 178,642 | 2,662,047 / 3,020,312 | 209,188 / 235,642
KK↔TR | 1,612,282 | 1,604,793 / 1,604,822 | 24,053,671 / 25,078,688 | 650,384 / 626,724 | 179,143 | 179,057 / 179,057 | 2,672,768 / 2,787,172 | 209,549 / 221,773
EN↔RU | 1,606,061 | 1,603,199 / 1,600,372 | 31,719,781 / 27,158,101 | 286,645 / 642,686 | 178,452 | 178,419 / 178,379 | 3,525,019 / 3,017,510 | 104,834 / 235,069
EN↔TR | 1,609,707 | 1,603,636 / 1,604,545 | 31,805,393 / 25,022,782 | 286,387 / 626,740 | 178,857 | 178,775 / 178,796 | 3,538,795 / 2,783,926 | 105,641 / 221,372
RU↔TR | 1,609,224 | 1,600,605 / 1,604,521 | 27,243,278 / 25,035,274 | 642,797 / 626,587 | 178,803 | 178,695 / 178,750 | 3,025,805 / 2,780,936 | 235,970 / 221,792
Corpus Structure
The entire corpus is organised into two groups based on file prefixes: files "01" through "19" carry the "kazparc" prefix, while files "20" through "32" carry the "sync" prefix.
```
├── kazparc
│   ├── 01_kazparc_all_entries.csv
│   ├── 02_kazparc_train_kk_en.csv
│   ├── 03_kazparc_train_kk_ru.csv
│   ├── 04_kazparc_train_kk_tr.csv
│   ├── 05_kazparc_train_en_ru.csv
│   ├── 06_kazparc_train_en_tr.csv
│   ├── 07_kazparc_train_ru_tr.csv
│   ├── 08_kazparc_valid_kk_en.csv
│   ├── 09_kazparc_valid_kk_ru.csv
│   ├── 10_kazparc_valid_kk_tr.csv
│   ├── 11_kazparc_valid_en_ru.csv
│   ├── 12_kazparc_valid_en_tr.csv
│   ├── 13_kazparc_valid_ru_tr.csv
│   ├── 14_kazparc_test_kk_en.csv
│   ├── 15_kazparc_test_kk_ru.csv
│   ├── 16_kazparc_test_kk_tr.csv
│   ├── 17_kazparc_test_en_ru.csv
│   ├── 18_kazparc_test_en_tr.csv
│   └── 19_kazparc_test_ru_tr.csv
└── sync
    ├── 20_sync_all_entries.csv
    ├── 21_sync_train_kk_en.csv
    ├── 22_sync_train_kk_ru.csv
    ├── 23_sync_train_kk_tr.csv
    ├── 24_sync_train_en_ru.csv
    ├── 25_sync_train_en_tr.csv
    ├── 26_sync_train_ru_tr.csv
    ├── 27_sync_valid_kk_en.csv
    ├── 28_sync_valid_kk_ru.csv
    ├── 29_sync_valid_kk_tr.csv
    ├── 30_sync_valid_en_ru.csv
    ├── 31_sync_valid_en_tr.csv
    └── 32_sync_valid_ru_tr.csv
```
KazParC files
- File "01" contains the original, unprocessed text data for the four languages considered within KazParC.
- Files "02" through "19" represent pre-processed texts divided into language pairs for training (Files "02" to "07"), validation (Files "08" to "13"), and testing (Files "14" to "19"). Language pairs are indicated within the filenames using two-letter language codes (e.g., kk_en).
SynC files
- File "20" contains raw, unprocessed text data for the four languages.
- Files "21" to "32" contain pre-processed text divided into language pairs for training (Files "21" to "26") and validation (Files "27" to "32") purposes.
Data Fields
In both "01" and "20", each line consists of the following fields:

- `id`: the unique line identifier
- `kk`: the sentence in Kazakh
- `en`: the sentence in English
- `ru`: the sentence in Russian
- `tr`: the sentence in Turkish
- `domain`: the domain of the sentence

For the other files, the fields are:

- `id`: the unique line identifier
- `source_lang`: the source language code
- `target_lang`: the target language code
- `domain`: the domain of the sentence
- `pair`: the language pair
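As an illustration of this field layout, a pair file can be read with the standard `csv` module. The sample row below is invented for demonstration, and comma separation is assumed from the `.csv` extension:

```python
import csv
import io

# A made-up sample mimicking the field layout of the pair files
# (files "02"-"19" and "21"-"32"); the row values are invented.
sample = io.StringIO(
    "id,source_lang,target_lang,domain,pair\n"
    "1,kk,en,mass media,kk_en\n"
)

rows = list(csv.DictReader(sample))
for row in rows:
    print(row["pair"], row["domain"])
```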
How to Use
To load the subsets of KazParC separately:
```python
from datasets import load_dataset

kazparc_raw = load_dataset("issai/kazparc", "kazparc_raw")
kazparc = load_dataset("issai/kazparc", "kazparc")
sync_raw = load_dataset("issai/kazparc", "sync_raw")
sync = load_dataset("issai/kazparc", "sync")
```