parquet-converter committed
Commit fdc1599 (1 parent: 82e266d)

Update parquet files

.gitattributes DELETED
@@ -1,53 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
data/validation-00000-of-00001-6393d2add96db558.parquet → CSAbstruct/csabstruct-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2bfa3978e73385878b06508a579ae3afc09c517a8e3c6b650432a774643807f7
- size 183552
+ oid sha256:e0ee556f54efc0ef4afbf2c81da14ac5655ae40e4bd8e7730d372ccf6de7c5b7
+ size 125536
data/train-00000-of-00001-e4ddac953345ce34.parquet → CSAbstruct/csabstruct-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b6c88b2a58e009f154c78ac6c793697f5f2fb2bf37ccb5b5574182fe015819c0
- size 1023643
+ oid sha256:dc8a8200511a3b695d7f438d0ffd5fd07437cb67bdd82d0a8209319369ced63e
+ size 1032074
data/test-00000-of-00001-be7e891381aedbe0.parquet → CSAbstruct/csabstruct-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2472bd3009f928d06c1e5e01296f7c361aa5206c3a505a87214a1e35106172eb
- size 124772
+ oid sha256:f352ac8798c0ded92ea7f211eaa675456ba389a32d0f3ea5d439ad7ad8ec1930
+ size 184316
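
The three shards above are renamed into the `CSAbstruct/` split layout served by the Hub's parquet converter. A minimal sketch for reading one shard directly, assuming the repo id `allenai/csabstruct` (inferred from the file contents, not stated in this commit) and that `huggingface_hub` is installed so pandas can resolve `hf://` paths:

```python
import pandas as pd

# Assumed repo id and path, both inferred from the renamed files above.
url = "hf://datasets/allenai/csabstruct/CSAbstruct/csabstruct-train.parquet"
train = pd.read_parquet(url)

# Expected columns, per the loader script deleted below:
# abstract_id, sentences, labels, confs
print(train.columns.tolist())
```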
README.md DELETED
@@ -1,62 +0,0 @@
- ---
- license: apache-2.0
- ---
-
-
- # CSAbstruct
-
- CSAbstruct was created as part of *"Pretrained Language Models for Sequential Sentence Classification"* ([ACL Anthology][2], [arXiv][1], [GitHub][6]).
-
- It contains 2,189 manually annotated computer science abstracts with sentences annotated according to their rhetorical roles in the abstract, similar to the [PUBMED-RCT][3] categories.
-
-
- ## Dataset Construction Details
-
- CSAbstruct is a new dataset of annotated computer science abstracts with sentence labels according to their rhetorical roles.
- The key difference between this dataset and [PUBMED-RCT][3] is that PubMed abstracts are written according to a predefined structure, whereas computer science papers are free-form.
- Therefore, there is more variety in writing styles in CSAbstruct.
- CSAbstruct is collected from the Semantic Scholar corpus [(Ammar et al., 2018)][4].
- Each sentence is annotated by 5 workers on the [Figure-eight platform][5], with one of 5 categories `{BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER}`.
-
- We use 8 abstracts (with 51 sentences) as test questions to train crowdworkers.
- Annotators whose accuracy is less than 75% are disqualified from doing the actual annotation job.
- The annotations are aggregated using the agreement on a single sentence weighted by the accuracy of the annotator on the initial test questions.
- A confidence score is associated with each instance based on the annotator's initial accuracy and the agreement of all annotators on that instance.
- We then split the dataset 75%/15%/10% into train/dev/test partitions, such that the test set has the highest confidence scores.
- The agreement rate on a random subset of 200 sentences is 75%, which is quite high given the difficulty of the task.
- Compared with [PUBMED-RCT][3], our dataset exhibits a wider variety of writing styles, since its abstracts are not written with an explicit structural template.
-
- ## Dataset Statistics
-
- | Statistic                | Avg ± std   |
- |--------------------------|-------------|
- | Doc length in sentences  | 6.7 ± 1.99  |
- | Sentence length in words | 21.8 ± 10.0 |
-
- | Label         | % in Dataset |
- |---------------|--------------|
- | `BACKGROUND`  | 33%          |
- | `METHOD`      | 32%          |
- | `RESULT`      | 21%          |
- | `OBJECTIVE`   | 12%          |
- | `OTHER`       | 3%           |
-
- ## Citation
-
- If you use this dataset, please cite the following paper:
-
- ```
- @inproceedings{Cohan2019EMNLP,
-   title={Pretrained Language Models for Sequential Sentence Classification},
-   author={Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Dan Weld},
-   year={2019},
-   booktitle={EMNLP},
- }
- ```
-
- [1]: https://arxiv.org/abs/1909.04054
- [2]: https://aclanthology.org/D19-1383
- [3]: https://github.com/Franck-Dernoncourt/pubmed-rct
- [4]: https://aclanthology.org/N18-3011/
- [5]: https://www.figure-eight.com/
- [6]: https://github.com/allenai/sequential_sentence_classification
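
The deleted README describes accuracy-weighted label aggregation with a per-instance confidence. The sketch below makes that concrete; it is a hypothetical reconstruction under that description, not the authors' exact procedure (the precise weighting is not specified in this repo):

```python
from collections import defaultdict

def aggregate_sentence(votes):
    """Accuracy-weighted vote aggregation for one sentence.

    `votes` is a list of (label, annotator_accuracy) pairs, where the
    accuracy comes from the annotator's initial test questions. Each vote
    is weighted by that accuracy; the confidence is the winning share.
    """
    weights = defaultdict(float)
    for label, accuracy in votes:
        weights[label] += accuracy
    winner = max(weights, key=weights.get)
    confidence = weights[winner] / sum(weights.values())
    return winner, confidence

# Five annotators (as in the dataset) voting on a single sentence:
print(aggregate_sentence([
    ("method", 0.95), ("method", 0.80), ("result", 0.78),
    ("method", 0.85), ("background", 0.76),
]))  # -> ('method', ~0.63)
```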
csabstruct.py DELETED
@@ -1,121 +0,0 @@
- """
- Dataset from https://github.com/allenai/sequential_sentence_classification
-
- Dataset maintainer: @soldni
- """
-
-
- import json
- from typing import Iterable, Sequence, Tuple
-
- import datasets
- from datasets.builder import BuilderConfig, GeneratorBasedBuilder
- from datasets.info import DatasetInfo
- from datasets.splits import Split, SplitGenerator
- from datasets.utils.logging import get_logger
-
- LOGGER = get_logger(__name__)
-
-
- _NAME = "CSAbstruct"
- _CITATION = """\
- @inproceedings{Cohan2019EMNLP,
-   title={Pretrained Language Models for Sequential Sentence Classification},
-   author={Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Dan Weld},
-   year={2019},
-   booktitle={EMNLP},
- }
- """
- _LICENSE = "Apache License 2.0"
- _DESCRIPTION = """\
- As a step toward better document-level understanding, we explore \
- classification of a sequence of sentences into their corresponding \
- categories, a task that requires understanding sentences in context \
- of the document. Recent successful models for this task have used \
- hierarchical models to contextualize sentence representations, and \
- Conditional Random Fields (CRFs) to incorporate dependencies between \
- subsequent labels. In this work, we show that pretrained language \
- models, BERT (Devlin et al., 2018) in particular, can be used for \
- this task to capture contextual dependencies without the need for \
- hierarchical encoding nor a CRF. Specifically, we construct a joint \
- sentence representation that allows BERT Transformer layers to \
- directly utilize contextual information from all words in all \
- sentences. Our approach achieves state-of-the-art results on four \
- datasets, including a new dataset of structured scientific abstracts.
- """
- _HOMEPAGE = "https://github.com/allenai/sequential_sentence_classification"
- _VERSION = "1.0.0"
-
- _URL = (
-     "https://raw.githubusercontent.com/allenai/"
-     "sequential_sentence_classification/master/"
- )
-
- _SPLITS = {
-     Split.TRAIN: _URL + "data/CSAbstruct/train.jsonl",
-     Split.VALIDATION: _URL + "data/CSAbstruct/dev.jsonl",
-     Split.TEST: _URL + "data/CSAbstruct/test.jsonl",
- }
-
-
- class CSAbstruct(GeneratorBasedBuilder):
-     """CSAbstruct"""
-
-     BUILDER_CONFIGS = [
-         BuilderConfig(
-             name=_NAME,
-             version=datasets.Version(_VERSION),
-             description=_DESCRIPTION,
-         )
-     ]
-
-     def _info(self) -> DatasetInfo:
-         class_labels = ["background", "method", "objective", "other", "result"]
-
-         features = datasets.Features(
-             {
-                 "abstract_id": datasets.Value("string"),
-                 "sentences": [datasets.Value("string")],
-                 "labels": [datasets.ClassLabel(names=class_labels)],
-                 "confs": [datasets.Value("float")],
-             }
-         )
-
-         return DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(
-         self, dl_manager: datasets.DownloadManager
-     ) -> Sequence[SplitGenerator]:
-         archive = dl_manager.download(_SPLITS)
-
-         return [
-             SplitGenerator(
-                 name=split_name,  # type: ignore
-                 gen_kwargs={
-                     "split_name": split_name,
-                     "filepath": archive[split_name],  # type: ignore
-                 },
-             )
-             for split_name in _SPLITS
-         ]
-
-     def _generate_examples(
-         self, split_name: str, filepath: str
-     ) -> Iterable[Tuple[str, dict]]:
-         """This function returns the examples in the raw (text) form."""
-
-         LOGGER.info(f"generating examples from documents in {filepath}...")
-
-         with open(filepath, mode="r", encoding="utf-8") as f:
-             data = [json.loads(ln) for ln in f]
-
-         for i, row in enumerate(data):
-             row["abstract_id"] = f"{split_name}_{i:04d}"
-             yield row["abstract_id"], row
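
With the splits converted to parquet, this script is no longer needed to load the dataset, but the raw JSONL it downloaded is still served from the GitHub URLs in `_SPLITS`. A stdlib-only sketch of fetching one split by hand (URL copied from the script; the record layout is whatever the upstream files contain):

```python
import json
import urllib.request

# Fetch the dev split JSONL directly from the URL the deleted script used.
url = (
    "https://raw.githubusercontent.com/allenai/"
    "sequential_sentence_classification/master/data/CSAbstruct/dev.jsonl"
)
with urllib.request.urlopen(url) as response:
    rows = [json.loads(line) for line in response]

# The script added abstract_id itself, so the raw records should carry
# only the annotation fields (e.g. sentences, labels, confs).
print(len(rows), "abstracts; first record keys:", sorted(rows[0]))
```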
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"allenai--csabstruct": {"description": "As a step toward better document-level understanding, we explore classification of a sequence of sentences into their corresponding categories, a task that requires understanding sentences in context of the document. Recent successful models for this task have used hierarchical models to contextualize sentence representations, and Conditional Random Fields (CRFs) to incorporate dependencies between subsequent labels. In this work, we show that pretrained language models, BERT (Devlin et al., 2018) in particular, can be used for this task to capture contextual dependencies without the need for hierarchical encoding nor a CRF. Specifically, we construct a joint sentence representation that allows BERT Transformer layers to directly utilize contextual information from all words in all sentences. Our approach achieves state-of-the-art results on four datasets, including a new dataset of structured scientific abstracts.\n", "citation": "@inproceedings{Cohan2019EMNLP,\n title={Pretrained Language Models for Sequential Sentence Classification},\n author={Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Dan Weld},\n year={2019},\n booktitle={EMNLP},\n}\n", "homepage": "https://github.com/allenai/sequential_sentence_classification", "license": "Apache License 2.0", "features": {"abstract_id": {"dtype": "string", "id": null, "_type": "Value"}, "sentences": [{"dtype": "string", "id": null, "_type": "Value"}], "labels": [{"num_classes": 5, "names": ["background", "method", "objective", "other", "result"], "id": null, "_type": "ClassLabel"}], "confs": [{"dtype": "float32", "id": null, "_type": "Value"}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "csabstruct", "config_name": "CSAbstruct", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1892682, "num_examples": 1668, "dataset_name": "csabstruct"}, "validation": {"name": "validation", "num_bytes": 335336, "num_examples": 295, "dataset_name": "csabstruct"}, "test": {"name": "test", "num_bytes": 226902, "num_examples": 226, "dataset_name": "csabstruct"}}, "download_checksums": null, "download_size": 1331967, "post_processing_size": null, "dataset_size": 2454920, "size_in_bytes": 3786887}}