Pclanglais committed on
Commit bf7276a
1 Parent(s): 68d09e0

Update README.md

Files changed (1):
  1. README.md +5 -3
README.md CHANGED
@@ -11,14 +11,14 @@ pretty_name: United States-Public Domain-Newspapers
 
 **US-PD-Newspapers** is an aggregation of all the archives of US newspapers digitized by the Library of Congress for the Chronicling America digital library.
 
- With nearly 100 billion words, it is one of the largest open corpora in the English language. All the materials are now part of the public domain and have no intellectual property rights remaining.
+ With nearly 100 billion words, it is one of the largest open corpora in the United States. All the materials are now part of the public domain and have no intellectual property rights remaining.
 
 ## Content
 As of January 2024, the collection contains nearly 21 million unique newspaper and periodical editions published from the 18th century to 1963 (98,742,987,471 words).
 
 The collection was compiled by Pierre-Carl Langlais based on the [dumps](https://chroniclingamerica.loc.gov/data/ocr/) made available by the Library of Congress. Each parquet file matches one of the 2618 original dump files, including their code name. It has the full text of a few thousand editions selected at random and a few core metadata fields (edition id, date, word counts…). The metadata can easily be expanded thanks to the LOC APIs and other data services.
 
- The composition of the dataset adheres to the US criteria for public domain (any publication without a copyright renewal). In agreement with the shorter term rules, the dataset is in the public domain in all countries with a Berne author-right model.
+ While most of the collection is in (American) English, it also covers a variety of European and Native American languages.
 
 The [American Stories dataset](https://huggingface.co/datasets/dell-research-harvard/AmericanStories) is a curated and enhanced version of the same resource, with significant progress with regard to text quality and documentation. It currently retains about 10-20% of the original material.
 
@@ -28,7 +28,9 @@ The primary use of the collection is for cultural analytics on a wide scale. It
 The collection also aims to expand the availability of open works for the training of Large Language Models. The text can be used for model training and republished without restriction for reproducibility purposes.
 
 ## License
- The entire collection is in the public domain in the US and, likely, everywhere. The Library of Congress does not claim any additional rights: "As a publicly supported institution, we generally do not own the rights to materials in our collections. You should determine for yourself whether or not an item is protected by copyright or in the public domain, and then satisfy any copyright or use restrictions when publishing or distributing materials from our collections."
+ The composition of the dataset adheres to the US criteria for public domain (any publication without a copyright renewal). In agreement with the shorter term rules, the dataset is in the public domain in all countries with a Berne author-right model.
+
+ The Library of Congress does not claim any additional rights: "As a publicly supported institution, we generally do not own the rights to materials in our collections. You should determine for yourself whether or not an item is protected by copyright or in the public domain, and then satisfy any copyright or use restrictions when publishing or distributing materials from our collections."
 
 ## Future developments
 This dataset is not a one-time work but will continue to evolve significantly in several directions:
 
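For readers who want to check the parquet layout described in the diff (one file per original LOC dump, full text plus edition id, date and word-count metadata), here is a minimal sketch using the Hugging Face `datasets` library. The repository id `PleIAs/US-PD-Newspapers` is an assumption inferred from the committer, and the column names are read from the data rather than guessed:

```python
# Minimal sketch: stream a few sample rows without downloading the
# full ~100B-word corpus. The repo id "PleIAs/US-PD-Newspapers" is an
# assumption, not stated in the diff.
from itertools import islice

from datasets import load_dataset

ds = load_dataset("PleIAs/US-PD-Newspapers", split="train", streaming=True)

# Discover the real column names rather than guessing them.
first = next(iter(ds))
print(sorted(first.keys()))

for row in islice(ds, 3):
    # Truncate values for display; each row holds a full edition's text.
    print({k: str(v)[:80] for k, v in row.items()})
```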
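The README also points at the LOC APIs for expanding the metadata. A hedged sketch against the documented Chronicling America JSON API (https://chroniclingamerica.loc.gov/about/api/); the example LCCN is hypothetical, and mapping the dataset's edition ids to LCCNs is left to the reader to verify:

```python
# Hedged sketch: expand a record's metadata via the Chronicling America
# JSON API. The LCCN below is an arbitrary illustration.
import requests

lccn = "sn86069873"  # hypothetical example LCCN
url = f"https://chroniclingamerica.loc.gov/lccn/{lccn}.json"
resp = requests.get(url, timeout=30)
resp.raise_for_status()

title = resp.json()
# The title record typically includes the newspaper's name and place
# of publication, among other fields.
print(title.get("name"), "-", title.get("place_of_publication"))
```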