---
YAML tags:
- copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging
---

# Dataset Card for the Turku Paraphrase Corpus

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://turkunlp.org/paraphrase.html
- **Repository:** https://github.com/TurkuNLP/Turku-paraphrase-corpus
- **Paper:** https://aclanthology.org/2021.nodalida-main.29
- **Leaderboard:**
- **Point of Contact:** jmnybl@utu.fi, filip.ginter@gmail.com

### Dataset Summary

The project gathered a large dataset of over 100,000 Finnish paraphrase pairs. The paraphrases were manually selected and classified so as to minimize lexical overlap and to provide examples that are maximally different, both structurally and lexically. The objective is a challenging dataset that better tests natural language understanding capabilities. An important feature of the data is that most paraphrase pairs are distributed in their document context. The primary application of the dataset is the development and evaluation of deep language models, and representation learning in general.

### Supported Tasks and Leaderboards

* Paraphrase classification
* Paraphrase generation

### Languages

Finnish

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

The dataset consists of pairs of text passages. A typical passage is about a sentence long, but a passage may also be longer or shorter than a sentence. Each example thus includes two text passages (string), a manually annotated label indicating the paraphrase type (string), and additional metadata. The dataset comes in three configurations:

* `plain`: loads the original data without any additional preprocessing or transformations.
* `classification`: builds the data in a form directly suitable for training a paraphrase classifier. Each example is duplicated in both directions, (text1, text2, label) --> (text2, text1, label), with the label flipped as needed for paraphrases carrying a directionality flag (`<` or `>`).
* `generation`: preprocesses the examples to be directly suitable for the paraphrase generation task. Paraphrases not suitable for generation (negative and highly context-dependent paraphrases) are discarded, and directional paraphrases are provided only in the direction from the more detailed passage to the more general one, in order to prevent model hallucination (i.e. the model learning to introduce new information). The remaining paraphrases are provided in both directions, (text1, text2, label) --> (text2, text1, label).

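A minimal sketch of loading these configurations with the `datasets` library; the Hub id `TurkuNLP/turku_paraphrase_corpus` is an assumption inferred from the repository name above, not something this card confirms:

```python
# A sketch of loading the three configurations. The dataset id
# TurkuNLP/turku_paraphrase_corpus is an assumption, not confirmed by this card.
from datasets import load_dataset

plain = load_dataset("TurkuNLP/turku_paraphrase_corpus", name="plain")
classification = load_dataset("TurkuNLP/turku_paraphrase_corpus", name="classification")
generation = load_dataset("TurkuNLP/turku_paraphrase_corpus", name="generation")

# In the classification configuration each pair appears in both directions,
# so it should contain roughly twice as many examples as plain.
print(plain)
print(classification)
```
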
Each pair in the `plain` and `classification` configurations includes the following fields:

* `id`: identifier of the paraphrase pair (string)
* `gem_id`: identifier of the paraphrase pair in the GEM dataset (string)
* `goeswith`: identifier of the document from which the paraphrase was extracted; `not available` when the paraphrase does not originate from document-structured data. All examples with the same `goeswith` value (other than `not available`) should be kept together in any train/dev/test split; most users won't need this (string)
* `fold`: 0-99; the data is split into 100 parts respecting document boundaries, so all paraphrases from one document are in the same fold. This can be used, for example, to implement cross-validation safely; most users won't need this (int)
* `text1`: first paraphrase passage (string)
* `text2`: second paraphrase passage (string)
* `label`: manually annotated label (string)
* `binary_label`: the label collapsed to binary, with values `positive` (paraphrase) and `negative` (not a paraphrase) (string)
* `is_rewrite`: indicator of whether the example is a human-produced rewrite or a naturally occurring paraphrase (bool)

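As an illustration of the `fold` field, a sketch of a document-safe 90/10 split; the dataset id and the `train` split name are assumptions:

```python
from datasets import load_dataset

# Assumptions: the dataset id and the "train" split name are not confirmed here.
ds = load_dataset("TurkuNLP/turku_paraphrase_corpus", name="plain")["train"]

# fold is 0-99 and respects document boundaries, so no document ends up on
# both sides of the split.
train = ds.filter(lambda ex: ex["fold"] < 90)
dev = ds.filter(lambda ex: ex["fold"] >= 90)
print(len(train), len(dev))
```
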
Each pair in the `generation` configuration includes the same fields, except that `text1` and `text2` are renamed to `input` and `output` to indicate the generation direction. The fields are thus: `id`, `gem_id`, `goeswith`, `fold`, `input`, `output`, `label`, `binary_label`, and `is_rewrite`.

**Context**: Most (but not all) of the paraphrase pairs are identified in their document context. By default, these contexts are not included, in order to conserve memory, but they can be accessed using the configurations `plain-context` and `classification-context`. These are exactly like `plain` and `classification`, with the following additional fields:

* `context1`: a dictionary with the fields `doctext` (string), `begin` (int), and `end` (int), meaning that the paraphrase in `text1` was extracted from `doctext[begin:end]`. In most cases, `doctext[begin:end]` and `text1` are the exact same string, but occasionally they differ, e.g. when intervening punctuation or other unrelated text was "cleaned" from `text1` during annotation. When the context is not available, `doctext` is an empty string and `begin == end == 0`
* `context2`: same as `context1`, but for `text2`

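A sketch of recovering a passage from its document context via `doctext[begin:end]`, under the same dataset-id assumption as above:

```python
from datasets import load_dataset

ds = load_dataset("TurkuNLP/turku_paraphrase_corpus", name="plain-context")["train"]
example = ds[0]

ctx = example["context1"]
if ctx["doctext"]:  # an empty doctext signals that no context is available
    span = ctx["doctext"][ctx["begin"]:ctx["end"]]
    # Usually identical to text1, but may differ slightly where the passage
    # was cleaned during annotation.
    print(span == example["text1"], span)
```
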
### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@fginter](https://github.com/fginter) for adding this dataset.