Commit 9beb17c (parent 3dc9194) by RonanMcGovern: Create README.md

Files changed (1): README.md (+217 lines)
---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1k
source_datasets:
- big_patent
task_categories:
- summarization
task_ids: []
paperswithcode_id: bigpatent
pretty_name: Big Patent <100k characters
tags:
- patent-summarization
---
# Sampled Big Patent Dataset
This is a sample of the Trelis/big_patent_sample dataset, containing only the rows whose descriptions are at most 100,000 characters long.

--- Sampled from Trelis/big_patent_sampled ---

# Sampled big_patent Dataset
This is a sampled big_patent dataset - sampled down for shorter fine-tuning runs.

The data is sampled with the aim of providing an even distribution across data lengths. The distribution is quite flat up to 1 million characters, making the dataset well suited to training on sequence lengths of up to 250,000 tokens.

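As a rough illustration of the length cut described above, a filter like this could be reproduced with `datasets.Dataset.filter`. This is a minimal sketch, not the script actually used to build this dataset; it assumes the parent dataset exposes a `train` split with a `description` column:

```python
from datasets import load_dataset

# Load the parent sampled dataset (assumes a 'train' split exists).
parent = load_dataset("Trelis/big_patent_sample", split="train")

# Keep only rows whose description is at most 100,000 characters long.
short = parent.filter(lambda row: len(row["description"]) <= 100_000)

print(f"{short.num_rows} of {parent.num_rows} rows pass the 100k-character cut")
```
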
# Dataset Card for Big Patent

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Big Patent](https://evasharma.github.io/bigpatent/)
- **Repository:**
- **Paper:** [BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization](https://arxiv.org/abs/1906.03741)
- **Leaderboard:**
- **Point of Contact:** [Lu Wang](mailto:wangluxy@umich.edu)

### Dataset Summary

BIGPATENT consists of 1.3 million records of U.S. patent documents along with human-written abstractive summaries.
Each US patent application is filed under a Cooperative Patent Classification (CPC) code.
There are nine such classification categories:
- a: Human Necessities
- b: Performing Operations; Transporting
- c: Chemistry; Metallurgy
- d: Textiles; Paper
- e: Fixed Constructions
- f: Mechanical Engineering; Lighting; Heating; Weapons; Blasting
- g: Physics
- h: Electricity
- y: General tagging of new or cross-sectional technology

The current defaults are version 2.1.2 (fix update to cased raw strings) and the 'all' CPC code configuration:
```python
from datasets import load_dataset
ds = load_dataset("big_patent")  # default is 'all' CPC codes
ds = load_dataset("big_patent", "all")  # the same as above
ds = load_dataset("big_patent", "a")  # only 'a' CPC codes
ds = load_dataset("big_patent", codes=["a", "b"])  # multiple CPC codes at once
```

To use the 1.0.0 version (lower-cased tokenized words), pass both the `codes` and `version` parameters:
```python
ds = load_dataset("big_patent", codes="all", version="1.0.0")
ds = load_dataset("big_patent", codes="a", version="1.0.0")
ds = load_dataset("big_patent", codes=["a", "b"], version="1.0.0")
```
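
If in doubt about which configurations exist, the `datasets` library can list them; a minimal sketch using `get_dataset_config_names`:

```python
from datasets import get_dataset_config_names

# List the available configurations ('all' plus the individual CPC codes).
print(get_dataset_config_names("big_patent"))
```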

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

English

## Dataset Structure

### Data Instances

Each instance contains a pair of `description` and `abstract`. `description` is extracted from the Description section of the patent, while `abstract` is extracted from the Abstract section.
```
{
 'description': 'FIELD OF THE INVENTION \n [0001] This invention relates to novel calcium phosphate-coated implantable medical devices and processes of making same. The unique calcium-phosphate coated implantable medical devices minimize...',
 'abstract': 'This invention relates to novel calcium phosphate-coated implantable medical devices...'
}
```

### Data Fields

- `description`: detailed description of the patent.
- `abstract`: the patent abstract.

### Data Splits

|     |   train | validation |  test |
|:----|--------:|-----------:|------:|
| all | 1207222 |      67068 | 67072 |
| a   |  174134 |       9674 |  9675 |
| b   |  161520 |       8973 |  8974 |
| c   |  101042 |       5613 |  5614 |
| d   |   10164 |        565 |   565 |
| e   |   34443 |       1914 |  1914 |
| f   |   85568 |       4754 |  4754 |
| g   |  258935 |      14385 | 14386 |
| h   |  257019 |      14279 | 14279 |
| y   |  124397 |       6911 |  6911 |

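As a quick sanity check, the split sizes in the table above can be compared against a freshly downloaded configuration; a minimal sketch for the small 'd' (Textiles; Paper) code:

```python
from datasets import load_dataset

# Load the 'd' configuration and report the size of each split.
ds = load_dataset("big_patent", "d")
print({split: ds[split].num_rows for split in ds})
# Per the table above, this should be {'train': 10164, 'validation': 565, 'test': 565}.
```
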
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```bibtex
@article{DBLP:journals/corr/abs-1906-03741,
  author     = {Eva Sharma and
                Chen Li and
                Lu Wang},
  title      = {{BIGPATENT:} {A} Large-Scale Dataset for Abstractive and Coherent
                Summarization},
  journal    = {CoRR},
  volume     = {abs/1906.03741},
  year       = {2019},
  url        = {http://arxiv.org/abs/1906.03741},
  eprinttype = {arXiv},
  eprint     = {1906.03741},
  timestamp  = {Wed, 26 Jun 2019 07:14:58 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1906-03741.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

### Contributions

Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.