Pclanglais committed on
Commit 1194e5c
1 Parent(s): 39cb0ac

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -14,13 +14,13 @@ pretty_name: United States-Public Domain-Newspapers
 With nearly 100 billion words, it is one of the largest open corpus in the English language. All the materials are now part of the public domain and have no intellectual property rights remaining.
 
 ## Content
-As of January 2024, the collection contains nearly 21 millions unique newspaper and periodical editions (98,742,987,471 words) from the [dumps](https://chroniclingamerica.loc.gov/data/ocr/) made available by the Library of Congress, published from the 18th century to 1963. Each parquet file matches one of the 2618 original dump files, including their code name. It has the full text of a few thousand selected at random and a few core metadatas (edition id, date, word counts…). The metadata can be easily expanded thanks to the LOC APIs and other data services.
+As of January 2024, the collection contains nearly 21 millions unique newspaper and periodical editions published from the 18th century to 1963 (98,742,987,471 words).
 
-This initial agregation was made possible thanks to the extensive open data program of the Library of Congress.
+The collection was compiled by Pierre-Carl Langlais based on the [dumps](https://chroniclingamerica.loc.gov/data/ocr/) made available by the Library of Congress. Each parquet file matches one of the 2618 original dump files, including their code name. It has the full text of a few thousand selected at random and a few core metadatas (edition id, date, word counts…). The metadata can be easily expanded thanks to the LOC APIs and other data services.
 
-The composition of the dataset adheres to the US criteria for public domain of collective works (any publication without a copyright removal). In agreement with the shorter term rules, the dataset is in all countries with a Berne author-right model.
+The composition of the dataset adheres to the US criteria for public domain (any publication without a copyright removal). In agreement with the shorter term rules, the dataset is in the public domain for all countries with a Berne author-right model.
 
-The [American Stories dataset](https://huggingface.co/datasets/dell-research-harvard/AmericanStories) is a curated version of the same resource, with significant enhancements of text quality and documentation. It currently retains about 10-20% of the original material.
+The [American Stories dataset](https://huggingface.co/datasets/dell-research-harvard/AmericanStories) is a curated and enhanced version of the same resource, with significant progress in regards to text quality and documentation. It currently retains about 10-20% of the original material.
 
 ## Uses
 The primary use of the collection is for cultural analytics on a wide scale. It has been instrumental for some major digital humanities projects like Viral Texts.
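
The updated README describes per-dump parquet shards carrying full text plus a few core metadata fields (edition id, date, word counts). As a minimal sketch of working with one such shard, assuming it has been downloaded locally; the file name and the column names used below are illustrative guesses, not something this commit documents:

```python
# Minimal sketch (not from the commit): inspect one parquet shard of the collection.
# Assumptions to flag: the local file name "batch_example.parquet" and the column
# names "id", "date", "word_count" are illustrative, not confirmed by the README diff.
import pandas as pd

df = pd.read_parquet("batch_example.parquet")   # one of the ~2,618 shards
print(df.columns.tolist())                      # check which metadata fields are actually present
print(len(df), "editions in this shard")

# Rough per-year word volume, assuming "date" parses as a date-like string.
df["year"] = pd.to_datetime(df["date"], errors="coerce").dt.year
print(df.groupby("year")["word_count"].sum().sort_index().head())
```

From there, the edition ids could be matched against the LOC and Chronicling America APIs mentioned in the README to expand the metadata, though the exact endpoints are not specified in this commit.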