IsmaelMousa committed
Commit 85ab365 • Parent(s): 226e3ef
Update the dataset card
README.md CHANGED

@@ -12,13 +12,12 @@ multilinguality:
 paperswithcode_id: bookcorpus
 pretty_name: books
 size_categories:
--
+- n<1K
 source_datasets:
 - original
 tags:
 - books
 - categories
-- txt
 - nlp
 - adventure
 - biographies
@@ -39,54 +38,43 @@ task_ids:
 
 # Books
 
-The books dataset consists of a diverse collection of books organized into *9* categories,
-each category is represented as a directory, containing books saved as `.txt` files.
+The books dataset consists of a diverse collection of books organized into *9* categories and split into `train` and `validation`, where the train split contains *40* books and the validation split *9* books.
 
-This dataset is designed to support various natural language processing (NLP) tasks, including `text generation` and `masked language modeling`.
+This dataset is thoroughly cleaned and designed to support various natural language processing (NLP) tasks, including `text generation` and `masked language modeling`.
 
 ## Details
 
-
-
--
--
--
--
-
-
-
-
-
-
-- adventure
-- biographies
-- children
-- classic
-- fantasy
-- historical
-- mystery
-- romance
-- science-fiction
-- **Task Categories:**
-- Text Generation
-- Fill-Mask
-- **Task IDs:**
-- language-modeling
-- masked-language-modeling
-
+The dataset contains 4 columns:
+
+- title: The title of the book.
+- author: The author of the book.
+- category: The genre/category of the book.
+- EN: The full text of the book, in English, thoroughly cleaned.
+
+Tasks:
+
+- Text Generation
+- Fill-Mask
+
 ## Categories
 
 The dataset is organized into the following categories:
 
-1.
-2.
-3.
-4.
-5.
-6.
-7.
-8.
-9.
+1. Adventure: 5 books.
+2. Biographies: 3 books.
+3. Children: 4 books.
+4. Classic: 7 books.
+5. Fantasy: 3 books.
+6. Historical: 6 books.
+7. Mystery: 7 books.
+8. Romance: 5 books.
+9. Science-Fiction: 9 books.
+
+## Splits
+The dataset is split as follows:
+
+1. train: 40 books.
+2. validation: 9 books, 1 book from each category.
 
 ## Usage
 
@@ -100,7 +88,7 @@ from datasets import load_dataset
 
 books = load_dataset("IsmaelMousa/books", split="train")
 
-print(books["
+print(books["EN"][500:600])
 ```
 
 
@@ -116,4 +104,4 @@ The books in this dataset are sourced from [Project Gutenberg](https://www.guten
 
 ## License
 
-The rights to the books are reserved by their respective authors. This dataset is provided under the Apache 2.0 license for both personal and commercial use, with proper attribution.
+The rights to the books are reserved by their respective authors. This dataset is provided under the Apache 2.0 license for both personal and commercial use, with proper attribution.