parquet-converter committed on
Commit 578fe5b · 1 Parent(s): 56645be

Update parquet files

.gitattributes DELETED
@@ -1,29 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- train.sn filter=lfs diff=lfs merge=lfs -text
- train.en filter=lfs diff=lfs merge=lfs -text
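The deleted `.gitattributes` rules are glob patterns that routed matching files (including the old `train.sn`/`train.en` text files) through Git LFS. A minimal Python sketch of how such patterns select files, using `fnmatch` as a rough stand-in for Git's wildmatch (the sample paths are illustrative, and `fnmatch` does not reproduce Git's basename-matching semantics exactly):

```python
from fnmatch import fnmatch

# A few of the patterns removed above; fnmatch approximates Git's
# wildmatch behaviour for simple globs like these.
patterns = ["*.parquet", "*.bin.*", "train.sn", "train.en"]

def lfs_tracked(path: str) -> bool:
    """Return True if any of the sampled LFS patterns matches the path."""
    return any(fnmatch(path, pattern) for pattern in patterns)

print(lfs_tracked("Itihasa/itihasa-train.parquet"))  # True
print(lfs_tracked("README.md"))                      # False
```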
train.en.csv → Itihasa/itihasa-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7b19bc0213bfec0660add7e0061fa4f0e76041ce3957994890d055103121ca22
- size 13525355
+ oid sha256:6945355a012076f1d6b7700d4a606943cf1df515b70745c9310c153f928d14da
+ size 2614907
train.sn.csv → Itihasa/itihasa-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:41ddcb4585d8c4d03cc9492573f734183cd19a70cf500b0c28375507625a87f8
- size 20553827
+ oid sha256:140249e686510bbcc213c82ea249b50c255744b91a056fddde1cf56be9b6609e
+ size 16377952
Itihasa/itihasa-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa4938079c8079c7be749f440f332eac52a6a5a8b2ad6a18cc65c757df652960
+ size 1393907
README.md DELETED
@@ -1,81 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - expert-generated
- language:
- - sa
- - en
- license:
- - apache-2.0
- multilinguality:
- - translation
- size_categories:
- - unknown
- source_datasets:
- - original
- task_categories:
- - text2text-generation
- task_ids: []
- pretty_name: Itihasa
- metrics:
- - bleu
- - sacrebleu
- - rouge
- - ter
- - chrF
- tags:
- - conditional-text-generation
- ---
-
- # Itihāsa
-
- Itihāsa is a Sanskrit-English translation corpus containing 93,000 Sanskrit shlokas and their English translations extracted from M. N. Dutt's seminal works on The Rāmāyana and The Mahābhārata. The paper which introduced this dataset can be found [here](https://aclanthology.org/2021.wat-1.22/).
-
- This repository contains the randomized train, development, and test sets. The original extracted data can be found [here](https://github.com/rahular/itihasa/tree/gh-pages/res) in JSON format. If you just want to browse the data, you can go [here](http://rahular.com/itihasa/).
-
- ## Usage
- ```
- >> from datasets import load_dataset
- >> dataset = load_dataset("rahular/itihasa")
- >> dataset
- DatasetDict({
-     train: Dataset({
-         features: ['translation'],
-         num_rows: 75162
-     })
-     validation: Dataset({
-         features: ['translation'],
-         num_rows: 6149
-     })
-     test: Dataset({
-         features: ['translation'],
-         num_rows: 11722
-     })
- })
-
- >> dataset['train'][0]
- {'translation': {'en': 'The ascetic Vālmīki asked Nārada, the best of sages and foremost of those conversant with words, ever engaged in austerities and Vedic studies.',
-   'sn': 'ॐ तपः स्वाध्यायनिरतं तपस्वी वाग्विदां वरम्। नारदं परिपप्रच्छ वाल्मीकिर्मुनिपुङ्गवम्॥'}}
- ```
-
- ## Citation
- If you found this dataset to be useful, please consider citing the paper as follows:
- ```
- @inproceedings{aralikatte-etal-2021-itihasa,
-     title = "Itihasa: A large-scale corpus for {S}anskrit to {E}nglish translation",
-     author = "Aralikatte, Rahul and
-       de Lhoneux, Miryam and
-       Kunchukuttan, Anoop and
-       S{\o}gaard, Anders",
-     booktitle = "Proceedings of the 8th Workshop on Asian Translation (WAT2021)",
-     month = aug,
-     year = "2021",
-     address = "Online",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.wat-1.22",
-     pages = "191--197",
-     abstract = "This work introduces Itihasa, a large-scale translation dataset containing 93,000 pairs of Sanskrit shlokas and their English translations. The shlokas are extracted from two Indian epics viz., The Ramayana and The Mahabharata. We first describe the motivation behind the curation of such a dataset and follow up with empirical analysis to bring out its nuances. We then benchmark the performance of standard translation models on this corpus and show that even state-of-the-art transformer architectures perform poorly, emphasizing the complexity of the dataset.",
- }
- ```
dev.en.csv DELETED
The diff for this file is too large to render. See raw diff
 
dev.sn.csv DELETED
The diff for this file is too large to render. See raw diff
 
itihasa.py DELETED
@@ -1,124 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """Itihasa Corpus."""
-
-
- import collections
-
- import datasets
-
-
- _DESCRIPTION = """\
- A Sanskrit-English machine translation dataset.
- """
-
- _CITATION = """\
- @inproceedings{aralikatte-etal-2021-itihasa,
-     title = "Itihasa: A large-scale corpus for {S}anskrit to {E}nglish translation",
-     author = "Aralikatte, Rahul and
-       de Lhoneux, Miryam and
-       Kunchukuttan, Anoop and
-       S{\o}gaard, Anders",
-     booktitle = "Proceedings of the 8th Workshop on Asian Translation (WAT2021)",
-     month = aug,
-     year = "2021",
-     address = "Online",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.wat-1.22",
-     pages = "191--197",
-     abstract = "This work introduces Itihasa, a large-scale translation dataset containing 93,000 pairs of Sanskrit shlokas and their English translations. The shlokas are extracted from two Indian epics viz., The Ramayana and The Mahabharata. We first describe the motivation behind the curation of such a dataset and follow up with empirical analysis to bring out its nuances. We then benchmark the performance of standard translation models on this corpus and show that even state-of-the-art transformer architectures perform poorly, emphasizing the complexity of the dataset.",
- }
- """
-
- _DATA_URL = "https://github.com/rahular/itihasa/archive/refs/heads/main.zip"
-
- # Tuple that describes a single pair of files with matching translations.
- # language_to_file is the map from language (2 letter string: example 'en')
- # to the file path in the extracted directory.
- TranslateData = collections.namedtuple("TranslateData", ["url", "language_to_file"])
-
-
- class ItihasaConfig(datasets.BuilderConfig):
-     """BuilderConfig for Itihasa."""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig for Itihasa."""
-         super(ItihasaConfig, self).__init__(
-             name="Itihasa",
-             description=_DESCRIPTION,
-             version=datasets.Version("1.0.0", ""),
-             **kwargs,
-         )
-
-
- class Itihasa(datasets.GeneratorBasedBuilder):
-     """Itihasa machine translation dataset."""
-
-     BUILDER_CONFIGS = [
-         ItihasaConfig()
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {"translation": datasets.features.Translation(languages=("sn", "en"))}
-             ),
-             supervised_keys=("sn", "en"),
-             homepage="http://www.rahular.com/itihasa/",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         dl_dir = dl_manager.download_and_extract(_DATA_URL)
-
-         source, target = "sn", "en"
-         path_tmpl = "{dl_dir}/itihasa-main/data/{split}.{lang}"
-
-         files = {}
-         for split in ("train", "dev", "test"):
-             files[split] = {
-                 "source_file": path_tmpl.format(dl_dir=dl_dir, split=split, lang=source),
-                 "target_file": path_tmpl.format(dl_dir=dl_dir, split=split, lang=target),
-             }
-
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs=files["train"]),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs=files["dev"]),
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs=files["test"]),
-         ]
-
-     def _generate_examples(self, source_file, target_file):
-         """This function returns the examples in the raw (text) form."""
-         with open(source_file, encoding="utf-8") as f:
-             source_sentences = f.read().split("\n")
-         with open(target_file, encoding="utf-8") as f:
-             target_sentences = f.read().split("\n")
-
-         assert len(target_sentences) == len(source_sentences), "Sizes do not match: %d vs %d for %s vs %s." % (
-             len(source_sentences),
-             len(target_sentences),
-             source_file,
-             target_file,
-         )
-
-         source, target = "sn", "en"
-         for idx, (l1, l2) in enumerate(zip(source_sentences, target_sentences)):
-             result = {"translation": {source: l1, target: l2}}
-             # Make sure that both translations are non-empty.
-             if all(result.values()):
-                 yield idx, result
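One detail worth noting in the script deleted above: its final check, `if all(result.values())`, inspects the outer `{"translation": ...}` dict, whose single value is a non-empty dict and therefore always truthy, so empty lines are never actually filtered out. A standalone sketch of the presumably intended pairing-and-filtering logic (function name and inline sample data are illustrative):

```python
def generate_examples(source_sentences, target_sentences):
    """Pair parallel lines and drop pairs where either side is empty."""
    assert len(source_sentences) == len(target_sentences), "Sizes do not match"
    for idx, (sn, en) in enumerate(zip(source_sentences, target_sentences)):
        if sn and en:  # test the strings themselves, not the wrapper dict
            yield idx, {"translation": {"sn": sn, "en": en}}

pairs = list(generate_examples(["श्लोक १", "", "श्लोक ३"],
                               ["shloka 1", "", "shloka 3"]))
print(len(pairs))  # 2: the empty middle pair is dropped
```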
test.en.csv DELETED
The diff for this file is too large to render. See raw diff
 
test.sn.csv DELETED
The diff for this file is too large to render. See raw diff