---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---

# Dataset Card for ARPA

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

We provide sentential paraphrase detection train and test datasets, as well as BERT-based models, for the Armenian language.

### Dataset Summary

The sentences in the dataset are taken from [Hetq](https://hetq.am/) and [Panarmenian](http://www.panarmenian.net/) news articles. To generate paraphrases, we used back translation from Armenian to English and back to Armenian, repeating the step twice, after which the generated paraphrases were manually reviewed. Invalid sentences were filtered out, while the rest were labelled as paraphrase, near paraphrase or non-paraphrase. Test examples were reviewed by 3 different annotators. In addition, to increase the number of non-paraphrase pairs, we padded the dataset with automatically generated negative examples, including pairs of consecutive sentences and random pairs.
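The generation step described above can be sketched as follows. The `to_english` and `to_armenian` callables are placeholders for whatever machine translation system is used (the card does not name one), so this is an illustrative sketch rather than the authors' actual pipeline:

```python
def back_translate(sentence, to_english, to_armenian):
    """One round of back translation: Armenian -> English -> Armenian."""
    return to_armenian(to_english(sentence))

def paraphrase_candidates(sentence, to_english, to_armenian, rounds=2):
    """Apply back translation repeatedly (the card repeats the step twice),
    keeping the output of each round as a paraphrase candidate.
    Candidates must still be manually reviewed and labelled."""
    candidates = []
    current = sentence
    for _ in range(rounds):
        current = back_translate(current, to_english, to_armenian)
        candidates.append(current)
    return candidates
```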
## Dataset Structure

Each row consists of two sentences and their label. The sentence pairs are labelled as paraphrase, near paraphrase or non-paraphrase (with the labels 1, 0 and -1, respectively). The sentences are divided into train and test sets.

| Number of examples | Total | Paraphrase | Non-paraphrase (near paraphrase) |
| :-- | :---: | :---: | :---: |
| Train | 4233 | 1339 | 2683 (211) |
| Test | 1682 | 1021 | 448 (213) |
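The label convention above can be summarized in a short sketch; the integer values are taken from this card, while the helper below simply mirrors how near paraphrase and non-paraphrase pairs are merged into one class for the evaluation reported in this card:

```python
# Label convention from the card: 1 = paraphrase,
# 0 = near paraphrase, -1 = non-paraphrase.
LABEL_NAMES = {1: "paraphrase", 0: "near paraphrase", -1: "non-paraphrase"}

def to_binary(label):
    """Collapse near paraphrase (0) and non-paraphrase (-1) into a single
    negative class, keeping paraphrase (1) as the positive class."""
    return 1 if label == 1 else 0
```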
### Dataset Evaluation

We finetuned Multilingual BERT on several training sets, including the proposed ARPA dataset, and evaluated their performance on our test set. During training and evaluation, near paraphrase and non-paraphrase pairs were combined into one class. The results are provided below:

| BERT Model | Train set | F1 | Acc. |
| :-- | :---: | :---: | :---: |
| Multilingual BERT | MRPC train set machine-translated into Armenian | 80.07 | 69.87 |
| Multilingual BERT | All of the above combined | 84 | 77.6 |
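For reference, the two reported metrics can be computed as below. This is a plain-Python sketch of standard binary F1 and accuracy (equivalent to `sklearn.metrics.f1_score` and `accuracy_score`), not the authors' evaluation code:

```python
def accuracy(y_true, y_pred):
    """Fraction of sentence pairs classified correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive
    (paraphrase) class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```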
## Additional Information

The model trained on ARPA is available for use, and can be downloaded using this [link](https://drive.google.com/uc?id=14owW5kkZ1JiNa6P-676e-QIj8m8i5e_8).

For more details about the models and dataset construction, refer to the [paper](https://arxiv.org/pdf/2009.12615).