---
license: cc0-1.0
---

**French-Public Domain-Book** or **French-PD-Books** is a large collection aiming to aggregate all the French monographs in the public domain.

The collection was originally curated by Benoît de Courson and Benjamin Azoulay for the Gallicagram project and reconfigured for large-scale data use by Pierre-Carl Langlais.

## Content

As of January 2024, the collection contains 289,000 books from the French National Library (Gallica). Each parquet file contains the full text of 2,000 books selected at random, along with a few core metadata fields (Gallica id, title, author, word count…). The metadata can be easily expanded thanks to the BnF API.
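The per-shard layout described above can be sketched as follows. This is a minimal illustration, not the dataset's actual schema: the column names (`gallica_id`, `title`, `author`, `word_count`, `text`) and the file name in the comment are assumptions.

```python
import pandas as pd

# Hypothetical rows mimicking one parquet shard of the collection; a real
# shard would instead be loaded with something like
#   pd.read_parquet("french_pd_books_0001.parquet")
# (the file name and column names are assumptions for illustration).
books = pd.DataFrame({
    "gallica_id": ["bpt6kA", "bpt6kB"],
    "title": ["Premier titre", "Second titre"],
    "author": ["Auteur, A.", "Auteur, B."],
    "word_count": [45210, 61877],
    "text": ["Texte intégral du premier livre…", "Texte intégral du second…"],
})

# A typical large-scale-analysis step: keep only books above a word-count
# threshold before any heavier text processing.
long_books = books[books["word_count"] > 50_000]
print(long_books["title"].tolist())  # → ['Second titre']
```

The same kind of filter would scale to the full 289,000-book collection by iterating over the parquet shards one at a time.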

This initial aggregation was made possible thanks to the open data program of the French National Library and the consolidation of public domain status for cultural heritage works in the EU with the 2019 Copyright Directive (art. 14).

The primary use of the collection is for cultural analytics projects on a wide scale.

The collection also aims to expand the availability of open works for the training of Large Language Models. The text can be used for model training and republished without restriction for reproducibility purposes.
## Future developments

This dataset is not a one-time work but will continue to evolve significantly in two directions:

* Correction of computer-generated errors in the text. All the texts have been transcribed automatically through the use of Optical Character Recognition (OCR) software. The original files have been digitized over a long time period (since the mid-2000s) and some documents should be. Future versions will strive either to re-OCRize the original text or to use experimental LLM models for partial OCR correction.
* Enhancement of the structure/editorial presentation of the original text. Some parts of the original documents are likely unwanted for large-scale analysis or model training (headers, page counts…). Additionally, some advanced document structures like tables or multi-column layouts are unlikely to be well formatted. Major enhancements could be expected through applying new SOTA layout recognition models (like COLAF) on the original PDF files.
* Expansion of the collection to other cultural heritage holdings, especially coming from Hathi Trust, Internet Archive and Google Books. Despite the application of restrictions