drvenabili committed on
Commit 9eb92bf
1 Parent(s): 652eeab

README and new script

Files changed (2):
  1. README.md +0 -176
  2. kubhist2.py +7 -19
README.md CHANGED
@@ -280,180 +280,4 @@ dataset_info:
    num_examples: 124880138
    download_size: 7483375536
    dataset_size: 7999426267
- license: cc-by-sa-4.0
- task_categories:
- - text-generation
- language:
- - sv
- tags:
- - newspapers
- - historical
- size_categories:
- - 1B<n<10B
  ---
-
- # kubhist2
-
- ## Dataset Description
-
- - **Homepage:** https://changeiskey.org
- - **Repository:** https://github.com/ChangeIsKey/kubhist2
- - **Point of Contact:** Simon Hengchen / iguanodon.ai
-
- ### Dataset Summary
-
- This is a version of the Kubhist 2 dataset originally created, curated and made available by Språkbanken Text (SBX) at the University of Gothenburg (Sweden) under the CC BY 4.0 license.
- It is a corpus of OCRed newspapers from Sweden spanning the 1640s to the 1900s.
- The original data is available with many types of annotation in XML at https://spraakbanken.gu.se/en/resources/kubhist2.
- A good description of the original data is available in this blog entry by Dana Dannélls: https://spraakbanken.gu.se/blogg/index.php/2019/09/15/the-kubhist-corpus-of-swedish-newspapers/.
-
- If you use this dataset for academic research, cite it using the citation information provided at the bottom of this page.
-
- In a nutshell, this Hugging Face dataset version offers:
- - only the OCRed text
- - availability in decadal subsets
- - one line per sentence; sentences shorter than 4 words were discarded
-
- In total this dataset contains 2,819,065,590 tokens. The distribution of tokens per decade is shown below.
-
- The license is CC BY-SA 4.0.
-
- ```bash
- (env) simon@terminus:/mnt/user/cik/kubhist2 wc -w text/*/*.txt
-       39348 text/1640/1640.txt
-        4700 text/1650/1650.txt
-        8524 text/1660/1660.txt
-        2396 text/1670/1670.txt
-      199670 text/1680/1680.txt
-      487943 text/1690/1690.txt
-      619884 text/1700/1700.txt
-      265930 text/1710/1710.txt
-      355759 text/1720/1720.txt
-      856218 text/1730/1730.txt
-     1589508 text/1740/1740.txt
-     2211316 text/1750/1750.txt
-     5496545 text/1760/1760.txt
-    14434932 text/1770/1770.txt
-    22366170 text/1780/1780.txt
-    26768856 text/1790/1790.txt
-    36225842 text/1800/1800.txt
-    44510588 text/1810/1810.txt
-    65571094 text/1820/1820.txt
-    95359730 text/1830/1830.txt
-   143992956 text/1840/1840.txt
-   214538699 text/1850/1850.txt
-   392672066 text/1860/1860.txt
-   524802728 text/1870/1870.txt
-   695859650 text/1880/1880.txt
-   498244203 text/1890/1890.txt
-    31580335 text/1900/1900.txt
-  2819065590 total
- ```
-
- ### Languages
-
- Swedish (nysvenska)
-
- ## Dataset Structure
-
- One feature: `text`.
-
- Load the whole corpus using
- ```python
- from datasets import load_dataset
-
- dataset = load_dataset("ChangeIsKey/kubhist2")
- ```
- or a decadal subset using
- ```python
- dataset = load_dataset("ChangeIsKey/kubhist2", "decade")
- ```
- Here `"decade"` is a placeholder: it must be a string, and valid values are the decades in `range(1640, 1910, 10)`, e.g. `"1640"`, `"1650"`, ..., `"1900"`.
-
- You can combine several decades using `concatenate_datasets` like this:
-
- ```python
- from datasets import load_dataset, concatenate_datasets
-
- ds_1800 = load_dataset("ChangeIsKey/kubhist2", "1800")
- ds_1810 = load_dataset("ChangeIsKey/kubhist2", "1810")
- ds_1820 = load_dataset("ChangeIsKey/kubhist2", "1820")
-
- ds_1800_1820 = concatenate_datasets([
-     ds_1800["train"],
-     ds_1810["train"],
-     ds_1820["train"]
- ])
- ```
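- A span of decades can also be assembled programmatically from the valid config names. A minimal sketch; `load_decades` is a hypothetical helper, not part of this dataset:
-
- ```python
- from datasets import load_dataset, concatenate_datasets
-
- # Hypothetical helper: load and concatenate the decadal configs
- # from `start` to `end` inclusive (both multiples of 10).
- def load_decades(start: int, end: int):
-     parts = [
-         load_dataset("ChangeIsKey/kubhist2", str(decade))["train"]
-         for decade in range(start, end + 10, 10)
-     ]
-     return concatenate_datasets(parts)
-
- ds_1800_1820 = load_decades(1800, 1820)
- ```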
-
- ### Data Splits
-
- The dataset has only one split, `train`.
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- The original data is in a highly annotated XML format that is not ideally suited for basic NLP tasks such as unsupervised language modeling: information such as page numbers, fonts, etc. is less relevant and has thus been discarded.
- Keeping only the running text of the newspapers and removing sentences shorter than 4 words further allows a roughly 150x reduction in data size (2.4 TB → 16 GB); see the sketch of the sentence filter below.
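- As an illustration of that filter, a minimal sketch (file names are hypothetical; the actual preprocessing pipeline, including XML parsing and sentence splitting, is not shown):
-
- ```python
- # Sketch of the length filter: keep sentences with at least 4
- # whitespace-separated tokens, given one sentence per line.
- def keep_sentence(sentence: str, min_words: int = 4) -> bool:
-     return len(sentence.split()) >= min_words
-
- # "1800.txt" / "1800.filtered.txt" are illustrative file names.
- with open("1800.txt", encoding="utf-8") as src, \
-      open("1800.filtered.txt", "w", encoding="utf-8") as dst:
-     for line in src:
-         if keep_sentence(line):
-             dst.write(line)
- ```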
-
- ### Source Data
-
- The original data is available with many types of annotation in XML at https://spraakbanken.gu.se/en/resources/kubhist2.
-
- #### Initial Data Collection and Normalization
-
- See Språkbanken Text's website.
-
- #### Who are the source language producers?
-
- Språkbanken Text: https://spraakbanken.gu.se/en/
-
- ### Personal and Sensitive Information
-
- This is historical newspaper data, with the latest material published in 1909. Everyone mentioned in this dataset was probably already a public figure, and has been dead for a long time.
-
- ## Considerations for Using the Data
-
- ### Discussion of Biases
-
- This is historical data. As such, outdated views may be present in it.
-
- ### Other Known Limitations
-
- The data comes from an OCR process. The text is thus not perfect, especially in the earlier decades.
-
- ## Additional Information
-
- ### Dataset Curators
-
- This Hugging Face version of the data has been created by Simon Hengchen.
-
- ### Licensing Information
-
- Creative Commons Attribution-ShareAlike 4.0: https://creativecommons.org/licenses/by-sa/4.0/
-
- ### Citation Information
-
- You should always cite the original Kubhist 2 release, provided below as BibTeX. If you additionally want to refer to this specific version, please also add a link to the Hugging Face page: https://huggingface.co/datasets/ChangeIsKey/kubhist2.
-
- ```bibtex
- @misc{Kubhist2,
-   title = {The Kubhist Corpus, v2},
-   url = {https://spraakbanken.gu.se/korp/?mode=kubhist},
-   author = {Spr{\aa}kbanken},
-   year = {Downloaded in 2019},
-   organization = {Department of Swedish, University of Gothenburg}
- }
- ```
-
- ### Acknowledgments
-
- This dataset has been created in the context of the [ChangeIsKey!](https://www.changeiskey.org/) project funded by Riksbankens Jubileumsfond under reference number M21-0021 (Change is Key! program).
- The compute dedicated to the creation of the dataset has been provided by [iguanodon.ai](https://iguanodon.ai).
-
- Many thanks go to Språkbanken Text for creating and curating this resource.
kubhist2.py CHANGED
@@ -54,7 +54,8 @@ _URLS = {'1640': './text/1640/1640.txt.gz',
          '1870': './text/1870/1870.txt.gz',
          '1880': './text/1880/1880.txt.gz',
          '1890': './text/1890/1890.txt.gz',
-         '1900': './text/1900/1900.txt.gz'
+         '1900': './text/1900/1900.txt.gz',
+         'all': './text/all/all.txt.gz',
          }


@@ -83,19 +84,12 @@ class kubhist2(datasets.GeneratorBasedBuilder):
         BUILDER_CONFIGS.append(
             kubhist2Config(
                 name=key,
-                version=datasets.Version("1.0.1", ""),
+                version=datasets.Version("1.0.2", ""),
                 description=f"Kubhist2: {key}",
                 period=key,
             )
         )
-    BUILDER_CONFIGS.append(
-        kubhist2Config(
-            name="all",
-            version=datasets.Version("1.0.1", ""),
-            description=f"Kubhist2: all",
-            period="all",
-        )
-    )
+
     DEFAULT_CONFIG_NAME = "all"

     def _info(self):
@@ -113,16 +107,10 @@ class kubhist2(datasets.GeneratorBasedBuilder):
         )

     def _split_generators(self, dl_manager):
-        if self.config.period != "all":
-            url = {"train" : _URLS[self.config.period]}
-            downloaded_files = dl_manager.download_and_extract(url)
-            return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]})]
-
-        elif self.config.period == "all":
-            url = {"train" : './text/all/all.txt.gz'}
-            #print(url)
-            downloaded_files = dl_manager.download_and_extract(url)
-            return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]})]
+        url = {"train" : _URLS[self.config.period]}
+        downloaded_files = dl_manager.download_and_extract(url)
+        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]})]
+

     def _generate_examples(self, filepath):
         """Yields examples."""