---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
license:
- cc-by-sa-4.0
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
language:
- en
pretty_name: Recursalberg
---
# Dataset Card for Recursalberg
![](Recursalberg.png "Clara is a dedicated volunteer and digital archivist. Inspired by her ancestor, she is committed to making literature accessible to everyone. With a background in library science and a deep appreciation for cultural works, Clara spends her days digitizing rare books and proofreading texts. Her character design reflects her role as a guardian of knowledge, bridging the gap between the historical significance of printed books and the accessibility of digital formats. Clara's serene and knowledgeable presence makes her a relatable and inspiring figure for readers and volunteers alike.")
*Waifu to catch your attention.*
## Dataset Details
### Dataset Description
*Recursalberg* is a cleaned dataset of Project Gutenberg books. We downloaded all publicly available Gutenberg books at the time of collection and processed them.
After filtering, the dataset totals **~XYZB** tokens (llama-2-7b-chat tokenizer) / **~XYZ** tokens (RWKV tokenizer), primarily in English.
- **Curated by:** KaraKaraWitch
- **Funded by [optional]:** Recursal.ai (I work there lol)
- **Shared by [optional]:** KaraKaraWitch
- **Language(s) (NLP):** Primarily English
- **License:** cc-by-sa-4.0
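
For reference, the snippet below is a rough sketch of how a llama-2-7b-chat token count could be reproduced from the JSONL output. The model id and file path are assumptions made for illustration, and the official Llama 2 tokenizer requires gated access on the Hugging Face Hub.

```python
# Illustrative sketch only: reproduce an approximate llama-2-7b-chat token count.
# The model id and JSONL path are assumptions, not part of the original pipeline.
import json

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

total = 0
with open("recursalberg/books.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        total += len(tokenizer.encode(record["text"], add_special_tokens=False))

print(f"~{total / 1e9:.2f}B tokens")
```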
### Dataset Sources [optional]
- **Source Data:** [gutenberg.org (see mirroring)](https://gutenberg.org/help/mirroring.html) (rclone download)
### Processing
We performed the following downloading and processing steps to prepare Recursalberg; an illustrative sketch of the cleaning step is shown after the list.
1. Use Gutenberg's rclone mirror to download the collection to a local folder.
2. Index the download with `gutenberg_index.py` (gathers all items to find HTML documents).
3. Process every book index that has an HTML version:
   - Remove the Gutenberg pre-HTML block, page numbers, HTML comments, and tables of contents.
   - Convert sections to Markdown.
   - Clean up newlines and standardize punctuation.
4. Save each processed HTML file into one JSONL file.
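
The sketch below shows roughly what step 3 does. It is not the actual processing script: the `beautifulsoup4`/`markdownify` usage, file paths, and regex rules are assumptions made for illustration.

```python
# Illustrative sketch of the HTML -> Markdown cleaning step (not the real script).
# Assumes `beautifulsoup4` and `markdownify` are installed; paths are hypothetical.
import json
import re

from bs4 import BeautifulSoup, Comment
from markdownify import markdownify as to_markdown


def clean_book_html(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")

    # Drop HTML comments (covers Gutenberg boilerplate markers as well).
    for comment in soup.find_all(string=lambda s: isinstance(s, Comment)):
        comment.extract()

    # Convert the remaining document to Markdown.
    text = to_markdown(str(soup))

    # Strip page-number-like lines and collapse excessive blank lines.
    text = re.sub(r"^\s*\d+\s*$", "", text, flags=re.MULTILINE)
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()


with open("books/12345.html", encoding="utf-8") as fin, \
        open("recursalberg.jsonl", "a", encoding="utf-8") as fout:
    fout.write(json.dumps({"text": clean_book_html(fin.read())}) + "\n")
```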
### Data Keys
```
text (str): the book's full text, converted to Markdown.
```
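
Once released, the JSONL files can be loaded with the Hugging Face `datasets` library. The `data_files` path below is a placeholder, since the final hosting location is still TBD.

```python
# Sketch: load the JSONL output with the `datasets` library.
# Adjust data_files to wherever the files end up being hosted.
from datasets import load_dataset

dataset = load_dataset("json", data_files="recursalberg/*.jsonl", split="train")
print(dataset[0]["text"][:500])  # first 500 characters of the first book
```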
### Dataset Curators
KaraKaraWitch. (I typically hang out in the PygmalionAI Discord, sometimes EleutherAI's. If something is wrong, ping `@karakarawitch` on Discord.)
I'd be happy if you could spread the word and recommend this dataset for your use cases `:)`
### Licensing Information
Complicated. Refer to Project Gutenberg's license [here](https://www.gutenberg.org/policy/license.html).
We have not seen any in-copyright books in the dataset so far, so we assume it is safe to use unless noted otherwise.
### Citation Information
```
@ONLINE{recursalberg,
title = {Recursalberg},
author = {KaraKaraWitch and recursal.ai},
year = {2023},
howpublished = {\url{TBD}},
}
```