---
license: odc-by
size_categories:
- 10K<n<100K
dataset_info:
- config_name: clean
  features:
  - name: meta
    struct:
    - name: publication_date
      dtype: int64
    - name: short_book_title
      dtype: string
    - name: url
      dtype: string
  - name: text
    dtype: string
  - name: first_25k
    dtype: string
  - name: label
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: train
    num_bytes: 11298462885
    num_examples: 26372
  download_size: 7022857617
  dataset_size: 11298462885
- config_name: default
  features:
  - name: meta
    struct:
    - name: publication_date
      dtype: int64
    - name: short_book_title
      dtype: string
    - name: url
      dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 10580548205.687407
    num_examples: 26372
  download_size: 6635583644
  dataset_size: 10580548205.687407
- config_name: embeddings-jina-base
  features:
  - name: meta
    struct:
    - name: publication_date
      dtype: int64
    - name: short_book_title
      dtype: string
    - name: url
      dtype: string
  - name: text
    dtype: string
  - name: embedding
    sequence: float64
  splits:
  - name: train
    num_bytes: 10801330292
    num_examples: 26372
  download_size: 6772846092
  dataset_size: 10801330292
configs:
- config_name: clean
  data_files:
  - split: train
    path: clean/train-*
- config_name: default
  data_files:
  - split: train
    path: data/train-*
- config_name: embeddings-jina-base
  data_files:
  - split: train
    path: embeddings-jina-base/train-*
---
# Dataset Card for "rp_books-en"
The `default` config has the following structure:
```python
Dataset({
features: ['meta', 'text'],
num_rows: 26372
})
```
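Each row in the `default` config pairs a `meta` struct with the book text. As a minimal sketch of that record shape in plain Python (field names taken from the schema above; the example values are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Meta:
    """Per-book metadata, mirroring the `meta` struct in the schema."""
    publication_date: int
    short_book_title: str
    url: str

@dataclass
class Record:
    """One row of the `default` config: metadata plus full text."""
    meta: Meta
    text: str

# Hypothetical example row
row = Record(
    meta=Meta(publication_date=1999,
              short_book_title="Example Title",
              url="https://example.com/book"),
    text="Full book text goes here...",
)
print(row.meta.short_book_title)  # → Example Title
```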
## Token count
GPT-4 (`tiktoken`) per-document token counts:
```
token_count
count 2.637200e+04
mean 1.009725e+05
std 1.161315e+05
min 3.811000e+03
25% 3.752750e+04
50% 7.757950e+04
75% 1.294130e+05
max 8.687685e+06
```
Total: 2662.85 M (≈2.66 B) tokens
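As a quick consistency check, the total follows from the summary statistics above (row count × mean tokens per document):

```python
# Reported summary statistics (from the table above)
num_docs = 26_372          # count
mean_tokens = 1.009725e5   # mean tokens per document

total_tokens = num_docs * mean_tokens
print(f"{total_tokens / 1e6:.2f} M tokens")  # → 2662.85 M tokens
```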