parquet-converter committed
Commit 6a038b9 • 1 Parent(s): 34a633b

Update parquet files
README.md DELETED
@@ -1,149 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language:
- - en
- language_creators:
- - found
- license:
- - odc-by
- multilinguality:
- - monolingual
- pretty_name: Multi-LexSum
- size_categories:
- - 1K<n<10K
- - 10K<n<100K
- source_datasets:
- - original
- tags: []
- task_categories:
- - summarization
- task_ids: []
- ---
-
- # Dataset Card for Multi-LexSum
- ## Table of Contents
- - [Dataset Card for Multi-LexSum](#dataset-card-for-multi-lexsum)
-   - [Table of Contents](#table-of-contents)
-   - [Dataset Description](#dataset-description)
-     - [Dataset Summary](#dataset-summary)
-     - [Languages](#languages)
-   - [Dataset](#dataset)
-     - [Data Fields](#data-fields)
-     - [Data Splits](#data-splits)
-   - [Dataset Sheet (Datasheet)](#dataset-sheet-datasheet)
-   - [Additional Information](#additional-information)
-     - [Dataset Curators](#dataset-curators)
-     - [Licensing Information](#licensing-information)
-     - [Citation Information](#citation-information)
-   - [Release History](#release-history)
-
- ## Dataset Description
-
- - **Homepage:** https://multilexsum.github.io
- - **Repository:** https://github.com/multilexsum/dataset
- - **Paper:** https://arxiv.org/abs/2206.10883
-
- <p>
-   <a href="https://multilexsum.github.io" style="display: inline-block;">
-     <img src="https://img.shields.io/badge/-homepage-informational.svg?logo=jekyll" title="Multi-LexSum Homepage" style="margin-top: 0.25rem; margin-bottom: 0.25rem"></a>
-   <a href="https://github.com/multilexsum/dataset" style="display: inline-block;">
-     <img src="https://img.shields.io/badge/-multilexsum-lightgrey.svg?logo=github" title="Multi-LexSum GitHub Repo" style="margin-top: 0.25rem; margin-bottom: 0.25rem"></a>
-   <a href="https://arxiv.org/abs/2206.10883" style="display: inline-block;">
-     <img src="https://img.shields.io/badge/NeurIPS-2022-9cf" title="Multi-LexSum is accepted in NeurIPS 2022" style="margin-top: 0.25rem; margin-bottom: 0.25rem"></a>
- </p>
-
- ### Talk @ NeurIPS 2022
-
- [![Watch the video](https://img.youtube.com/vi/C-fwW_ZhkE8/0.jpg)](https://youtu.be/C-fwW_ZhkE8)
-
- ### Dataset Summary
-
- The Multi-LexSum dataset is a collection of 9,280 legal case summaries covering civil rights lawsuits. Multi-LexSum is distinct from other datasets in its **multiple target summaries, each at a different granularity** (ranging from one-sentence “extreme” summaries to multi-paragraph narrations of over five hundred words). It presents a challenging multi-document summarization task given **the long length of the source documents**, often exceeding two hundred pages per case. Unlike other summarization datasets that are (semi-)automatically curated, Multi-LexSum consists of **expert-authored summaries**: the experts (lawyers and law students) are trained to follow carefully created guidelines, and their work is reviewed by an additional expert to ensure quality.
-
- ### Languages
-
- English
-
- ## Dataset
-
- ### Data Fields
-
- The dataset contains a list of instances (cases); each instance contains the following data:
-
- | Field         | Description                                                                      |
- | ------------- | -------------------------------------------------------------------------------- |
- | id            | `(str)` The case ID                                                              |
- | sources       | `(List[str])` A list of strings for the text extracted from the source documents |
- | summary/long  | `(str)` The long (multi-paragraph) summary for this case                         |
- | summary/short | `(Optional[str])` The short (one-paragraph) summary for this case                |
- | summary/tiny  | `(Optional[str])` The tiny (one-sentence) summary for this case                  |
-
- Please check the exemplar usage below for loading the data:
-
- ```python
- from datasets import load_dataset
-
- multi_lexsum = load_dataset("allenai/multi_lexsum", name="v20220616")
- # Download multi_lexsum locally and load it as a Dataset object
-
- example = multi_lexsum["validation"][0]  # The first instance of the dev set
- example["sources"]  # A list of source document text for the case
-
- for sum_len in ["long", "short", "tiny"]:
-     print(example["summary/" + sum_len])  # Summaries of three lengths
- ```
-
- ### Data Splits
-
- |             | Instances | Source Documents (D) | Long Summaries (L) | Short Summaries (S) | Tiny Summaries (T) | Total Summaries |
- | ----------- | --------: | -------------------: | -----------------: | ------------------: | -----------------: | --------------: |
- | Train (70%) |     3,177 |               28,557 |              3,177 |               2,210 |              1,130 |           6,517 |
- | Test (20%)  |       908 |                7,428 |                908 |                 616 |                312 |           1,836 |
- | Dev (10%)   |       454 |                4,134 |                454 |                 312 |                161 |             927 |
-
- ## Dataset Sheet (Datasheet)
-
- Please check our [dataset sheet](https://multilexsum.github.io/datasheet) for details regarding dataset creation, source data, annotation, and considerations for use.
-
- ## Additional Information
-
- ### Dataset Curators
-
- The dataset was created through a collaboration between the Civil Rights Litigation Clearinghouse (CRLC, at the University of Michigan) and the Allen Institute for AI. Multi-LexSum builds on the dataset used and posted by the Clearinghouse to inform the public about civil rights litigation.
-
- ### Licensing Information
-
- The Multi-LexSum dataset is distributed under the [Open Data Commons Attribution License (ODC-By)](https://opendatacommons.org/licenses/by/1-0/).
- The case summaries and metadata are licensed under the [Creative Commons Attribution-NonCommercial License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/), and the source documents are already in the public domain.
- Commercial users who desire a license for the summaries and metadata can contact [info@clearinghouse.net](mailto:info@clearinghouse.net); such a license allows free use but limits summary re-posting.
- The corresponding code for downloading and loading the dataset is licensed under the Apache License 2.0.
-
- ### Citation Information
-
- ```
- @article{Shen2022MultiLexSum,
-   author    = {Zejiang Shen and
-                Kyle Lo and
-                Lauren Yu and
-                Nathan Dahlberg and
-                Margo Schlanger and
-                Doug Downey},
-   title     = {Multi-LexSum: Real-World Summaries of Civil Rights Lawsuits at Multiple Granularities},
-   journal   = {CoRR},
-   volume    = {abs/2206.10883},
-   year      = {2022},
-   url       = {https://doi.org/10.48550/arXiv.2206.10883},
-   doi       = {10.48550/arXiv.2206.10883}
- }
- ```
-
- ## Release History
-
- | Version     | Description              |
- | ----------- | ------------------------ |
- | `v20220616` | The initial v1.0 release |
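The card above notes that `summary/short` and `summary/tiny` are `Optional[str]`. A minimal sketch of filtering for cases that actually have a short summary, assuming the `v20220616` config from the deleted card still resolves after this conversion:

```python
from datasets import load_dataset

# Assumption: the "allenai/multi_lexsum" repo still exposes the
# v20220616 config after the parquet conversion in this commit.
multi_lexsum = load_dataset("allenai/multi_lexsum", name="v20220616")

# Keep only cases with a short summary; per the split table above,
# this should leave about 2,210 of the 3,177 training cases.
with_short = multi_lexsum["train"].filter(
    lambda ex: ex["summary/short"] is not None
)
print(len(with_short))
```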
multi_lexsum.py DELETED
@@ -1,155 +0,0 @@
- from typing import List, Union, Dict, Any, Tuple
- import json
- import os
-
- import datasets
- from datasets.tasks import Summarization
-
- logger = datasets.logging.get_logger(__name__)
-
-
- def _load_jsonl(filename):
-     with open(filename, "r") as fp:
-         jsonl_content = fp.read()
-
-     result = [json.loads(jline) for jline in jsonl_content.splitlines()]
-     return result
-
-
- def _load_json(filepath):
-
-     with open(filepath, "r") as fp:
-         res = json.load(fp)
-     return res
-
-
- _CITATION = """
- @article{Shen2022MultiLexSum,
-   author    = {Zejiang Shen and
-                Kyle Lo and
-                Lauren Yu and
-                Nathan Dahlberg and
-                Margo Schlanger and
-                Doug Downey},
-   title     = {Multi-LexSum: Real-World Summaries of Civil Rights Lawsuits at Multiple Granularities},
-   journal   = {CoRR},
-   volume    = {abs/2206.10883},
-   year      = {2022},
-   url       = {https://doi.org/10.48550/arXiv.2206.10883},
-   doi       = {10.48550/arXiv.2206.10883}
- }
- """  # TODO
-
- _DESCRIPTION = """
- Multi-LexSum is a multi-doc summarization dataset for civil rights litigation lawsuits with summaries of three granularities.
- """  # TODO: Update with full abstract
-
- _HOMEPAGE = "https://multilexsum.github.io"
-
- # _BASE_URL = "https://ai2-s2-research.s3.us-west-2.amazonaws.com/multilexsum/releases"
- _BASE_URL = "https://huggingface.co/datasets/allenai/multi_lexsum/resolve/main/releases"
- _FILES = {
-     "train": "train.json",
-     "dev": "dev.json",
-     "test": "test.json",
-     "sources": "sources.json",
- }
-
-
- class MultiLexsumConfig(datasets.BuilderConfig):
-     """BuilderConfig for LexSum."""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig for LexSum.
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(MultiLexsumConfig, self).__init__(**kwargs)
-
-
- class MultiLexsum(datasets.GeneratorBasedBuilder):
-     """MultiLexSum Dataset: a multi-doc summarization dataset for
-     civil rights litigation lawsuits with summaries of three granularities.
-     """
-
-     BUILDER_CONFIGS = [
-         MultiLexsumConfig(
-             name="v20220616",
-             version=datasets.Version("1.0.0", "Public v1.0 release."),
-             description="The v1.0 Multi-LexSum dataset",
-         ),
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "sources": datasets.Sequence(datasets.Value("string")),
-                     "summary/long": datasets.Value("string"),
-                     "summary/short": datasets.Value("string"),
-                     "summary/tiny": datasets.Value("string"),
-                 }
-             ),
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-             task_templates=[
-                 Summarization(text_column="sources", summary_column="summary/long")
-             ],
-         )
-
-     def _split_generators(self, dl_manager):
-
-         base_url = _BASE_URL if self.config.data_dir is None else self.config.data_dir
-         downloaded_files = dl_manager.download_and_extract(
-             {
-                 name: f"{base_url}/{self.config.name}/{filename}"
-                 for name, filename in _FILES.items()
-             }
-         )
-         # Since the sources file is large, we read it once up front
-         sources = _load_json(downloaded_files["sources"])
-
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "subset_file": downloaded_files["train"],
-                     "sources": sources,
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "subset_file": downloaded_files["dev"],
-                     "sources": sources,
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "subset_file": downloaded_files["test"],
-                     "sources": sources,
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, subset_file: str, sources: Dict[str, Dict]):
-         """This function returns the examples in the raw (text) form."""
-         logger.info(f"generating examples from = {subset_file}")
-
-         subset_cases = _load_jsonl(subset_file)
-         for case_data in subset_cases:
-             case_sources = [
-                 sources[source_id]["doc_text"]
-                 for source_id in case_data["case_documents"]
-             ]
-             yield case_data["case_id"], {
-                 "id": case_data["case_id"],
-                 "sources": case_sources,
-                 "summary/long": case_data["summary/long"],
-                 "summary/short": case_data["summary/short"],
-                 "summary/tiny": case_data["summary/tiny"],
-             }
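The heart of the deleted script is the join in `_generate_examples`: each split file holds case records whose `case_documents` IDs index into the shared `sources.json`. A minimal standalone sketch of the same join, assuming the raw `releases/v20220616/*.json` files are available locally:

```python
import json
from typing import Dict, Iterator

def load_split(subset_file: str, sources_file: str) -> Iterator[Dict]:
    # Read the large sources mapping once, as the deleted builder did.
    with open(sources_file, "r") as fp:
        sources = json.load(fp)
    # Each non-empty line of the split file is one JSON case record.
    with open(subset_file, "r") as fp:
        cases = [json.loads(line) for line in fp if line.strip()]
    for case in cases:
        yield {
            "id": case["case_id"],
            "sources": [
                sources[doc_id]["doc_text"]
                for doc_id in case["case_documents"]
            ],
            "summary/long": case["summary/long"],
            "summary/short": case["summary/short"],
            "summary/tiny": case["summary/tiny"],
        }

# Example (hypothetical local paths):
# examples = list(load_split("releases/v20220616/dev.json",
#                            "releases/v20220616/sources.json"))
```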
releases/v20220616/train.json → v20220616/multi_lexsum-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:edb45aee04a4aa1eebce2ec05880304322f29baf1b7bcf2c23409c020cae6aca
- size 15711733
+ oid sha256:ca89ee8897fcd531a1f95d1f21d678220141048919b44b3d70dd6eb3106d8857
+ size 144559608

releases/v20220616/dev.json → v20220616/multi_lexsum-train-00000-of-00002.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:00b5110633b1b3d5c33e6cc4645525b3633cbb1844fcdc63315f6b2b340fa958
- size 2281645
+ oid sha256:eb0e65d9388a8831f0e655153acc52ca91b80599a6faea8d0bf1102925f0d5bd
+ size 377305194

releases/v20220616/sources.json → v20220616/multi_lexsum-train-00001-of-00002.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d15a29d05ee4c8052270bf630633ec5b8c5ff7dab6cc480c009dc50f76c8ce24
- size 2219115572
+ oid sha256:e2370cd9a115cfb82d2822aecf850b56df4117e960044a8f1acf7581ca285782
+ size 218168248

releases/v20220616/test.json → v20220616/multi_lexsum-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:64fafb058f84cef09fc3d89df3ef913ee824d28f62bde9c1c9f52cc4d7c5b40a
- size 4272330
+ oid sha256:efdef98da7bee246e116bac11f8dd6f95cdedf2a753598dbf8fa4977c77cfdbf
+ size 94415701
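With the loading script gone, the parquet shards above can be read with the generic `parquet` builder. A minimal sketch, assuming the shards have been fetched locally (e.g., by cloning this dataset repo) and that the shard layout stays exactly as renamed above:

```python
from datasets import load_dataset

# Shard layout copied from the renamed files in this commit.
data_files = {
    "train": "v20220616/multi_lexsum-train-*.parquet",
    "validation": "v20220616/multi_lexsum-validation.parquet",
    "test": "v20220616/multi_lexsum-test.parquet",
}

# The generic "parquet" builder reads the files as-is; no custom
# builder class is needed after this conversion.
multi_lexsum = load_dataset("parquet", data_files=data_files)
print(multi_lexsum)
```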