vkpriya committed on
Commit 50a0688
1 Parent(s): d6adb9f

add other links

Files changed (1)
  1. README.md +9 -5
README.md CHANGED
@@ -17,7 +17,7 @@ size_categories:
 
 The dataset consists of 5500 English sentence pairs that are scored and ranked on a relatedness scale ranging from 0 (least related) to 1 (most related).
 
-### Loading the Dataset
+## Loading the Dataset
 - The sentence pairs and their associated scores are in the file sem_text_rel_ranked.csv in the root directory. The CSV file can be read using:
 
 ```python
@@ -40,19 +40,19 @@ The dataset consists of 5500 English sentence pairs that are scored and ranked o
 - The SubsetID column indicates the sampling strategy used for the source dataset.
 - The PairID is a unique identifier for each pair that also indicates its Source and Subset.
 
-### Why Semantic Relatedness?
+## Why Semantic Relatedness?
 Closeness of meaning can be of two kinds: semantic relatedness and semantic similarity. Two sentences are considered semantically similar when they have a paraphrase or entailment relation, whereas relatedness accounts for all of the commonalities that can exist between two sentences. Semantic relatedness is central to textual coherence and narrative structure. Automatically determining semantic relatedness has many applications, such as question answering, plagiarism detection, text generation (say, in personal assistants and chatbots), and summarization.
 
 Prior NLP work has focused on semantic similarity (a small subset of semantic relatedness), largely because of a dearth of datasets. In this paper, we present the first manually annotated dataset of sentence-sentence semantic relatedness. It includes fine-grained scores of relatedness from 0 (least related) to 1 (most related) for 5,500 English sentence pairs. The sentences are taken from diverse sources and thus also have diverse sentence structures, varying amounts of lexical overlap, and varying formality.
 
-### Comparative Annotations and Best-Worst Scaling
+## Comparative Annotations and Best-Worst Scaling
 Most existing sentence-sentence similarity datasets were annotated, one item at a time, using coarse rating labels such as integer values between 1 and 5 representing degrees of closeness. It is well documented that such approaches suffer from inter- and intra-annotator inconsistency, scale region bias, and issues arising from the fixed granularity.
 
 The relatedness scores for our dataset were, instead, obtained using a __comparative__ annotation schema. In comparative annotations, two (or more) items are presented together and the annotator has to determine which is greater with respect to the metric of interest.
 
 Specifically, we use Best-Worst Scaling, a comparative annotation method, which has been shown to produce reliable scores with fewer annotations in other NLP tasks. We use scripts from https://saifmohammad.com/WebPages/BestWorst.html to obtain relatedness scores from our annotations.
 
-### Citing our work
+## Citing our work
 Please use the following BibTeX entry to cite us if you use our dataset or any of the [associated analyses](https://arxiv.org/abs/2110.04845):
 
 ```
@@ -66,9 +66,13 @@ Please use the following BibTeX entry to cite us if you use our dataset or any o
 }
 ```
 
-### Ethics Statement
+## Ethics Statement
 Any dataset of semantic relatedness entails several ethical considerations. We discuss these in Section 10 of [our paper](https://arxiv.org/abs/2110.04845).
 
+## Relevant Links
+- [GitHub repository](https://github.com/Priya22/semantic-textual-relatedness)
+- [Zenodo page](https://zenodo.org/record/7599667)
+
 ## Creators
 - [Mohamed Abdalla](https://www.cs.toronto.edu/~msa/index_all.html) (University of Toronto)
 - [Krishnapriya Vishnubhotla](https://priya22.github.io/) (University of Toronto)
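The Python snippet under "Loading the Dataset" is cut off at the diff hunk boundary above. As a minimal sketch only (not the card's own code), the CSV can be read with pandas; apart from SubsetID and PairID, which the card names, the column names used below are assumptions:

```python
# Minimal sketch: load the relatedness CSV with pandas.
# Only SubsetID and PairID are documented in the card; "Score" is an assumed
# name for the 0-1 relatedness column and may differ in the actual file.
import pandas as pd

df = pd.read_csv("sem_text_rel_ranked.csv")
print(df.columns.tolist())  # check the real column names first

# If a 0-1 relatedness column (assumed here to be "Score") is present,
# list the most related pairs first.
if "Score" in df.columns:
    top = df.sort_values("Score", ascending=False)[["PairID", "SubsetID", "Score"]]
    print(top.head())
```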
 
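
The Best-Worst Scaling section describes the annotation scheme, but the scoring step is only linked, not shown. As an illustration of the standard BWS counting procedure (not the scripts referenced in the card), each item's score is the fraction of times it was chosen as best minus the fraction of times it was chosen as worst, rescaled here to [0, 1]; the 4-tuples and PairIDs below are made up:

```python
# Standard Best-Worst Scaling counts-based scoring (illustrative only;
# the dataset's official scores come from the scripts linked in the card).
from collections import Counter

# Each annotation: (items shown together, item judged most related,
# item judged least related). These PairIDs are hypothetical.
annotations = [
    (("p1", "p2", "p3", "p4"), "p1", "p4"),
    (("p1", "p2", "p3", "p5"), "p2", "p5"),
    (("p1", "p3", "p4", "p5"), "p1", "p4"),
]

best, worst, shown = Counter(), Counter(), Counter()
for items, b, w in annotations:
    shown.update(items)
    best[b] += 1
    worst[w] += 1

# Raw BWS score in [-1, 1], linearly rescaled to [0, 1].
scores = {item: ((best[item] - worst[item]) / shown[item] + 1) / 2 for item in shown}
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(item, round(score, 3))
```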