---
license: odc-by
size_categories:
  - 10K<n<100K
dataset_info:
  - config_name: clean
    features:
      - name: meta
        struct:
          - name: publication_date
            dtype: int64
          - name: short_book_title
            dtype: string
          - name: url
            dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 10639195236
        num_examples: 26372
    download_size: 6599508967
    dataset_size: 10639195236
  - config_name: default
    features:
      - name: meta
        struct:
          - name: publication_date
            dtype: int64
          - name: short_book_title
            dtype: string
          - name: url
            dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 10580548205.687407
        num_examples: 26372
    download_size: 6635583644
    dataset_size: 10580548205.687407
  - config_name: embeddings-jina-base
    features:
      - name: meta
        struct:
          - name: publication_date
            dtype: int64
          - name: short_book_title
            dtype: string
          - name: url
            dtype: string
      - name: text
        dtype: string
      - name: embedding
        sequence: float64
    splits:
      - name: train
        num_bytes: 10801330292
        num_examples: 26372
    download_size: 6772846092
    dataset_size: 10801330292
configs:
  - config_name: clean
    data_files:
      - split: train
        path: clean/train-*
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
  - config_name: embeddings-jina-base
    data_files:
      - split: train
        path: embeddings-jina-base/train-*
---

# Dataset Card for "rp_books-en"

The `default` config:

```
Dataset({
    features: ['meta', 'text'],
    num_rows: 26372
})
```
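A minimal loading sketch, assuming the Hub repo id is `pszemraj/rp_books-en` (adjust the repo id and config name as needed; the available configs are listed in the YAML header above):

```python
from datasets import load_dataset

# Assumed repo id; replace with the actual Hub path if it differs.
REPO_ID = "pszemraj/rp_books-en"

# "default" config: a 'meta' struct plus a 'text' string per row.
ds = load_dataset(REPO_ID, "default", split="train")
print(ds)  # Dataset({features: ['meta', 'text'], num_rows: 26372})

# "embeddings-jina-base" adds an 'embedding' float sequence per row.
emb = load_dataset(REPO_ID, "embeddings-jina-base", split="train")
print(emb[0]["meta"]["short_book_title"], len(emb[0]["embedding"]))
```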

## Token count

GPT-4 `tiktoken` token count per document:

```
        token_count
count  2.637200e+04
mean   1.009725e+05
std    1.161315e+05
min    3.811000e+03
25%    3.752750e+04
50%    7.757950e+04
75%    1.294130e+05
max    8.687685e+06
```

Total count: 2662.85 M tokens
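A rough sketch of how these per-document counts and the total could be reproduced with `tiktoken`; the exact script is not part of this card, so the repo id and details below are assumptions:

```python
import pandas as pd
import tiktoken
from datasets import load_dataset

enc = tiktoken.encoding_for_model("gpt-4")  # cl100k_base encoding

# Assumed repo id; replace with the actual Hub path if it differs.
ds = load_dataset("pszemraj/rp_books-en", "default", split="train")

# Count tokens per document; disallowed_special=() avoids errors if the
# raw text happens to contain special-token strings. Slow for ~26k books.
token_counts = [
    len(enc.encode(row["text"], disallowed_special=())) for row in ds
]

stats = pd.Series(token_counts, name="token_count")
print(stats.describe())
print(f"Total count: {stats.sum() / 1e6:.2f} M tokens")
```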