---
dataset_info:
  features:
    - name: chunk_index
      dtype: int64
    - name: chunk_text
      dtype: string
    - name: chunk_tokens
      sequence: int64
    - name: chunk_token_count
      dtype: int64
    - name: id
      dtype: string
    - name: url
      dtype: string
    - name: score
      dtype: float64
    - name: dump
      dtype: string
    - name: embedding
      sequence: float64
    - name: __index_level_0__
      dtype: int64
  splits:
    - name: train
      num_bytes: 296035820712
      num_examples: 25504378
  download_size: 215649217827
  dataset_size: 296035820712
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: apache-2.0
pretty_name: FineWeb-edu 10BT Sample embedded with nomic-text-v1.5
size_categories:
  - 10M<n<100M
---

# FineWeb-edu 10BT Sample embedded with nomic-text-v1.5

The FineWeb-edu 10BT sample was first split into 500-token chunks (tokenized with bert-base-uncased) with 10% overlap, resulting in about 25.5 million rows and ~10.5B tokens. The chunks were then embedded with nomic-text-v1.5.
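
For illustration, here is a minimal sketch of that chunking step, assuming a straightforward fixed-stride sliding window over bert-base-uncased token ids. The field names mirror this dataset's columns, but the exact pipeline used to build the dataset may differ:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def chunk_document(text, chunk_size=500, overlap=0.10):
    """Split a document into fixed-size token chunks with fractional overlap."""
    tokens = tokenizer(text, add_special_tokens=False)["input_ids"]
    stride = int(chunk_size * (1 - overlap))  # 450-token step for 10% overlap
    chunks = []
    for start in range(0, len(tokens), stride):
        window = tokens[start : start + chunk_size]
        chunks.append({
            "chunk_index": len(chunks),
            "chunk_tokens": window,
            "chunk_token_count": len(window),
            "chunk_text": tokenizer.decode(window),
        })
        if start + chunk_size >= len(tokens):
            break  # last window already reached the end of the document
    return chunks
```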

## Dataset Details

### Dataset Description

- **Curated by:** Ian Johnson (@enjalot)
- **Funded by:** Latent Interfaces
- **License:** Apache License 2.0

### Dataset Sources

- **Source data:** [HuggingFaceFW/fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) (sample-10BT subset)

## Uses

### Direct Use

The dataset was embedded with the `clustering:` task prefix, so the main use case is clustering and feature extraction. The motivation for making the dataset is to create training data for a sparse autoencoder (SAE) to identify features in nomic-text-v1.5.
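
To embed new text into the same vector space, the same prefix must be prepended. A minimal sketch using sentence-transformers, assuming nomic-text-v1.5 refers to nomic-ai/nomic-embed-text-v1.5 on the Hub:

```python
from sentence_transformers import SentenceTransformer

# trust_remote_code is required to load the Nomic embedding model
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)

texts = ["The mitochondria is the powerhouse of the cell."]
# Prepend the same task prefix used to build this dataset so new
# vectors live in the same space as the stored embeddings.
embeddings = model.encode([f"clustering: {t}" for t in texts])
print(embeddings.shape)  # (1, 768)
```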

## Dataset Structure

The columns of the dataset are:

- `id`: the document id in fineweb-edu
- `url`: the URL of the document in fineweb-edu
- `score`: the educational quality score from fineweb-edu
- `dump`: the CommonCrawl dump the document came from in fineweb-edu
- `chunk_index`: which chunk of the original document this is
- `chunk_text`: the text of the chunk
- `chunk_tokens`: the chunk's token ids from bert-base-uncased
- `chunk_token_count`: the number of tokens in this chunk
- `embedding`: the 768-dimensional nomic-text-v1.5 embedding of the chunk
- `__index_level_0__`: a residual pandas index column from the Parquet export
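
Since the full dataset is ~216GB to download, streaming is a practical way to inspect it. A minimal loading sketch; the repo id below is a placeholder and should be replaced with this dataset's actual id on the Hub:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's Hub id.
ds = load_dataset(
    "enjalot/fineweb-edu-sample-10BT-chunked-500-nomic-text-v1.5",
    split="train",
    streaming=True,  # avoids downloading the full ~216GB of Parquet files
)
row = next(iter(ds))
print(row["chunk_index"], row["chunk_token_count"], len(row["embedding"]))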

## Dataset Creation

### Curation Rationale

The 10BT sample is big enough to warrant a scaled-up process but manageable enough to be done on a small budget. Using on-demand CPUs and GPUs from modal.com, the total cost was about $60.
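
As a rough illustration of the on-demand setup, here is a minimal Modal sketch for running the embedding step on a rented GPU. The app name, GPU type, and batching are illustrative assumptions, not the author's actual pipeline:

```python
import modal

app = modal.App("fineweb-embed")  # hypothetical app name
image = modal.Image.debian_slim().pip_install("sentence-transformers", "einops")

@app.function(gpu="A10G", image=image)  # GPU choice is an assumption
def embed_batch(texts: list[str]) -> list[list[float]]:
    from sentence_transformers import SentenceTransformer
    model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
    # Same clustering: prefix as the rest of the dataset
    return model.encode([f"clustering: {t}" for t in texts]).tolist()

@app.local_entrypoint()
def main():
    vectors = embed_batch.remote(["an example chunk of text"])
    print(len(vectors[0]))  # 768
```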