soldni committed
Commit 9cb7ebc
Parent: f03bdd9

updated readme

Files changed (1): README.md (+12, -10)
README.md CHANGED
@@ -5,18 +5,18 @@ license: apache-2.0
 
 # CSAbstruct
 
-CSAbstruct was created as part of ["Pretrained Language Models for Sequential Sentence Classification"][1].
+CSAbstruct was created as part of *"Pretrained Language Models for Sequential Sentence Classification"* ([ACL Anthology][2], [arXiv][1], [GitHub][6]).
 
-It contains 2,189 manually annotated computer science abstracts with sentences annotated according to their rhetorical roles in the abstract, similar to the [PUBMED-RCT][2] categories.
+It contains 2,189 manually annotated computer science abstracts with sentences annotated according to their rhetorical roles in the abstract, similar to the [PUBMED-RCT][3] categories.
 
 
 ## Dataset Construction Details
 
 CSAbstruct is a new dataset of annotated computer science abstracts with sentence labels according to their rhetorical roles.
-The key difference between this dataset and [PUBMED-RCT][2] is that PubMed abstracts are written according to a predefined structure, whereas computer science papers are free-form.
+The key difference between this dataset and [PUBMED-RCT][3] is that PubMed abstracts are written according to a predefined structure, whereas computer science papers are free-form.
 Therefore, there is more variety in writing styles in CSAbstruct.
-CSAbstruct is collected from the Semantic Scholar corpus [(Ammar et al., 2018)][3].
-Each sentence is annotated by 5 workers on the [Figure-eight platform][4], with one of 5 categories `{BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER}`.
+CSAbstruct is collected from the Semantic Scholar corpus [(Ammar et al., 2018)][4].
+Each sentence is annotated by 5 workers on the [Figure-eight platform][5], with one of 5 categories `{BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER}`.
 
 We use 8 abstracts (with 51 sentences) as test questions to train crowdworkers.
 Annotators whose accuracy is less than 75% are disqualified from doing the actual annotation job.
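
Editor's note: since this hunk describes the dataset's per-sentence labels, a minimal loading sketch may help orient readers. The repo id `allenai/csabstruct` and the `sentences`/`labels` field names are assumptions not confirmed by this diff; only `datasets.load_dataset` itself is standard.

```python
# A minimal sketch of loading CSAbstruct with the Hugging Face `datasets`
# library. The repo id and field names below are assumptions.
from datasets import load_dataset

dataset = load_dataset("allenai/csabstruct")  # assumed repo id

example = dataset["train"][0]
for sentence, label in zip(example["sentences"], example["labels"]):
    # Each label is one of BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER.
    print(f"{label:>10}: {sentence}")
```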
@@ -24,7 +24,7 @@ The annotations are aggregated using the agreement on a single sentence weighted
 A confidence score is associated with each instance based on the annotator's initial accuracy and agreement of all annotators on that instance.
 We then split the dataset 75%/15%/10% into train/dev/test partitions, such that the test set has the highest confidence scores.
 Agreement rate on a random subset of 200 sentences is 75%, which is quite high given the difficulty of the task.
-Compared with [PUBMED-RCT][2], our dataset exhibits a wider variety of writing styles, since its abstracts are not written with an explicit structural template.
+Compared with [PUBMED-RCT][3], our dataset exhibits a wider variety of writing styles, since its abstracts are not written with an explicit structural template.
 
 ## Dataset Statistics
 
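Editor's note: the accuracy-weighted aggregation and confidence-based split described in this hunk can be sketched as below. Function and field names are illustrative, and the paper's exact weighting formula is not reproduced here; this is a sketch of the stated scheme, not the authors' code.

```python
# Sketch of accuracy-weighted label aggregation and the confidence-sorted
# 75%/15%/10% split described above. All names are illustrative.
from collections import defaultdict

def aggregate_sentence(votes, worker_accuracy):
    """Aggregate one sentence's 5 crowd votes into a single label.

    votes: list of (worker_id, label) pairs;
    worker_accuracy: worker_id -> accuracy on the 8 test abstracts
    (workers below 75% accuracy were already disqualified upstream).
    """
    scores = defaultdict(float)
    for worker_id, label in votes:
        scores[label] += worker_accuracy[worker_id]  # accuracy-weighted vote
    label = max(scores, key=scores.get)
    # Confidence reflects how much of the weighted vote the winner captured.
    confidence = scores[label] / sum(scores.values())
    return label, confidence

def split_by_confidence(instances):
    """75%/15%/10% train/dev/test split, placing the highest-confidence
    instances in the test partition."""
    ranked = sorted(instances, key=lambda ex: ex["confidence"])
    n = len(ranked)
    train = ranked[: int(n * 0.75)]
    dev = ranked[int(n * 0.75) : int(n * 0.90)]
    test = ranked[int(n * 0.90) :]  # highest-confidence tail
    return train, dev, test
```
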
@@ -54,7 +54,9 @@ If you use this dataset, please cite the following paper:
 }
 ```
 
-[1]: https://aclanthology.org/D19-1383
-[2]: https://arxiv.org/abs/1710.06071
-[3]: https://aclanthology.org/N18-3011/
-[4]: https://www.figure-eight.com/
+[1]: https://arxiv.org/abs/1909.04054
+[2]: https://aclanthology.org/D19-1383
+[3]: https://arxiv.org/abs/1710.06071
+[4]: https://aclanthology.org/N18-3011/
+[5]: https://www.figure-eight.com/
+[6]: https://github.com/allenai/sequential_sentence_classification