muxitox committed on
Commit 31e8cfa
1 Parent(s): e3dd34f

Reupload files

Files changed (5)
  1. README.md +155 -0
  2. dev.jsonl +0 -0
  3. parafraseja.py +100 -0
  4. test.jsonl +0 -0
  5. train.jsonl +0 -0
README.md ADDED
---
annotations_creators:
- CLiC-UB
language_creators:
- found
language:
- ca
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: Parafraseja
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- multi-input-text-classification
---

# Dataset Card for Parafraseja

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Point of Contact:** [blanca.calvo@bsc.es](mailto:blanca.calvo@bsc.es)

### Dataset Summary

Parafraseja is a dataset of 21,984 sentence pairs, each labelled as to whether the two sentences are paraphrases. The original sentences were collected from [TE-ca](https://huggingface.co/datasets/projecte-aina/teca) and [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca). For each original sentence, an annotator wrote one sentence that is a paraphrase of it and one that is not. The annotation guidelines are available.

### Supported Tasks and Leaderboards

This dataset is mainly intended to train models for paraphrase detection.

### Languages

The dataset is in Catalan (`ca`).

## Dataset Structure

The dataset consists of pairs of sentences labelled "Parafrasis" or "No Parafrasis", stored in JSONL format.

### Data Instances

<pre>
{
  "id": "te1_14977_1",
  "source": "teca",
  "original": "La 2a part consta de 23 cap\u00edtols, cadascun dels quals descriu un ocell diferent.",
  "new": "La segona part consisteix en vint-i-tres cap\u00edtols, cada un dels quals descriu un ocell diferent.",
  "label": "Parafrasis"
}
</pre>

### Data Fields
- id: unique identifier of the sentence pair
- source: the dataset the original sentence comes from
- original: the original sentence
- new: the new sentence, which may or may not be a paraphrase of the original
- label: the relation between original and new, either "Parafrasis" or "No Parafrasis"
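Each line of the JSONL files is one such record; as a minimal sketch, a line can be read with Python's standard `json` module (the sample below is the instance shown above):

```python
import json

# One line from the JSONL files, taken from the Data Instances example above.
line = (
    '{"id": "te1_14977_1", "source": "teca", '
    '"original": "La 2a part consta de 23 cap\\u00edtols, '
    'cadascun dels quals descriu un ocell diferent.", '
    '"new": "La segona part consisteix en vint-i-tres cap\\u00edtols, '
    'cada un dels quals descriu un ocell diferent.", '
    '"label": "Parafrasis"}'
)

record = json.loads(line)
# The binary signal used for paraphrase detection.
is_paraphrase = record["label"] == "Parafrasis"
print(record["id"], "->", is_paraphrase)
```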
### Data Splits

* dev.jsonl: 2,000 examples
* test.jsonl: 4,000 examples
* train.jsonl: 15,984 examples

## Dataset Creation

### Curation Rationale

We created this corpus to contribute to the development of language models in Catalan, a low-resource language.

### Source Data

The original sentences of this dataset come from [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca) and [TE-ca](https://huggingface.co/datasets/projecte-aina/teca).

#### Initial Data Collection and Normalization

11,543 of the original sentences came from TE-ca, and 10,441 came from STS-ca.

#### Who are the source language producers?

TE-ca and STS-ca come from the [Catalan Textual Corpus](https://zenodo.org/record/4519349#.Y1Zs__uxXJF), which consists of several corpora gathered from web crawling and public corpora, and [Vilaweb](https://www.vilaweb.cat), a Catalan newswire.

### Annotations

The dataset is annotated with the label "Parafrasis" or "No Parafrasis" for each pair of sentences.

#### Annotation process

The annotation was done by a single annotator and reviewed by another.

#### Who are the annotators?

The annotators were native Catalan speakers with a background in linguistics.

### Personal and Sensitive Information

No personal or sensitive information is included.

## Considerations for Using the Data

### Social Impact of Dataset

We hope this corpus contributes to the development of language models in Catalan, a low-resource language.

### Discussion of Biases

We are aware that this data might contain biases. We have not applied any steps to reduce their impact.

### Other Known Limitations

[N/A]

## Additional Information

### Dataset Curators

Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)

This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).

### Licensing Information

[Creative Commons Attribution Non-commercial No-Derivatives 4.0 International](https://creativecommons.org/licenses/by-nc-nd/4.0/).

### Contributions

[N/A]
dev.jsonl ADDED
The diff for this file is too large to render.
parafraseja.py ADDED
# Loading script for the Parafraseja dataset.


import json

import datasets


logger = datasets.logging.get_logger(__name__)


_CITATION = """ """


_DESCRIPTION = """ Parafraseja is a dataset of 21,984 pairs of sentences with a label that indicates if they are paraphrases or not. The original sentences were collected from TE-ca and STS-ca. For each sentence, an annotator wrote a sentence that was a paraphrase and another that was not. The guidelines of this annotation are available. """


_HOMEPAGE = """ https://huggingface.co/datasets/projecte-aina/Parafraseja/ """


_URL = "https://huggingface.co/datasets/projecte-aina/Parafraseja/resolve/main/"
_TRAINING_FILE = "train.jsonl"
_DEV_FILE = "dev.jsonl"
_TEST_FILE = "test.jsonl"


class ParafrasejaConfig(datasets.BuilderConfig):
    """Builder config for the Parafraseja dataset."""

    def __init__(self, **kwargs):
        """BuilderConfig for Parafraseja.

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(ParafrasejaConfig, self).__init__(**kwargs)


class Parafraseja(datasets.GeneratorBasedBuilder):
    """Parafraseja dataset."""

    BUILDER_CONFIGS = [
        ParafrasejaConfig(
            name="Parafraseja",
            version=datasets.Version("1.0.0"),
            description="Parafraseja dataset",
        ),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "sentence1": datasets.Value("string"),
                    "sentence2": datasets.Value("string"),
                    "label": datasets.features.ClassLabel(
                        names=[
                            "No Parafrasis",
                            "Parafrasis",
                        ]
                    ),
                }
            ),
            homepage=_HOMEPAGE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        urls_to_download = {
            "train": f"{_URL}{_TRAINING_FILE}",
            "dev": f"{_URL}{_DEV_FILE}",
            "test": f"{_URL}{_TEST_FILE}",
        }
        downloaded_files = dl_manager.download_and_extract(urls_to_download)

        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
        ]

    def _generate_examples(self, filepath):
        """Yields the examples in the raw (text) form."""
        logger.info("generating examples from = %s", filepath)
        with open(filepath, encoding="utf-8") as f:
            data = [json.loads(line) for line in f]
            for id_, article in enumerate(data):
                yield id_, {
                    "sentence1": article["original"],
                    "sentence2": article["new"],
                    "label": article["label"],
                }
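The field mapping in `_generate_examples` — raw `original`/`new`/`label` keys remapped to the `sentence1`/`sentence2`/`label` schema — can be exercised on an in-memory sample. This is a sketch: `generate_examples` below is a standalone copy of the method's body (minus the `datasets` machinery), and the two sample records are invented for illustration:

```python
import io
import json

def generate_examples(fileobj):
    # Same logic as Parafraseja._generate_examples: one JSON object per line,
    # remapped to the (sentence1, sentence2, label) feature schema.
    data = [json.loads(line) for line in fileobj]
    for id_, article in enumerate(data):
        yield id_, {
            "sentence1": article["original"],
            "sentence2": article["new"],
            "label": article["label"],
        }

# A two-line stand-in for one of the JSONL split files (hypothetical content).
sample = io.StringIO(
    '{"original": "Frase A", "new": "Frase B", "label": "Parafrasis"}\n'
    '{"original": "Frase C", "new": "Frase D", "label": "No Parafrasis"}\n'
)
examples = dict(generate_examples(sample))
print(examples[0])
```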
test.jsonl ADDED
The diff for this file is too large to render.
train.jsonl ADDED
The diff for this file is too large to render.