---
dataset_info:
  features:
  - name: chunk_index
    dtype: int64
  - name: chunk_text
    dtype: string
  - name: chunk_tokens
    sequence: int64
  - name: chunk_token_count
    dtype: int64
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: score
    dtype: float64
  - name: dump
    dtype: string
  - name: embedding
    sequence: float64
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 296035820712
    num_examples: 25504378
  download_size: 215649217827
  dataset_size: 296035820712
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
pretty_name: FineWeb-edu 10BT Sample embedded with nomic-text-v1.5
size_categories:
- 10M<n<100M
---
# FineWeb-edu 10BT Sample embedded with nomic-text-v1.5
The [FineWeb-edu 10BT sample](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu/tree/main/sample/10BT) was first split into 500-token chunks (tokenized with bert-base-uncased) with 10% overlap, resulting in ~25.5 million rows and ~10.5B tokens.
The chunks were then embedded using [nomic-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5).
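For orientation, the sketch below shows roughly how such chunking and embedding could be done. The 500-token window, 10% overlap, and `clustering: ` prefix come from this card; the function and variable names are illustrative, and the actual scripts live in the repository linked below.

```python
from transformers import AutoTokenizer
from sentence_transformers import SentenceTransformer

# Tokenizer used for chunking (bert-base-uncased, per this card)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Embedding model; nomic-text-v1.5 requires trust_remote_code
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)

def chunk_document(text, chunk_size=500, overlap=0.10):
    """Split a document into token windows with 10% overlap (450-token stride)."""
    tokens = tokenizer.encode(text, add_special_tokens=False)
    step = int(chunk_size * (1 - overlap))
    chunks = []
    for start in range(0, len(tokens), step):
        window = tokens[start:start + chunk_size]
        chunks.append({
            "chunk_tokens": window,
            "chunk_token_count": len(window),
            "chunk_text": tokenizer.decode(window),
        })
        if start + chunk_size >= len(tokens):
            break
    return chunks

# Embed with the clustering prefix, matching how this dataset was produced
chunks = chunk_document("some document text")
texts = ["clustering: " + c["chunk_text"] for c in chunks]
embeddings = model.encode(texts)  # 768-dimensional vectors
```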
## Dataset Details
### Dataset Description
- **Curated by:** Ian @enjalot Johnson
- **Funded by:** Latent Interfaces
- **License:** Apache license 2.0
### Dataset Sources
- **Repository:** https://github.com/enjalot/fineweb-modal
## Uses
### Direct Use
The dataset was embedded with the `clustering: ` prefix, so the main use case is clustering and feature extraction.
The motivation for making the dataset is to create training data for an [SAE to identify features](https://transformer-circuits.pub/2024/scaling-monosemanticity) in nomic-text-v1.5.
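As a minimal sketch (not the exact pipeline code), new text can be embedded into the same space by using the identical prefix:

```python
from sentence_transformers import SentenceTransformer

# nomic-text-v1.5 requires trust_remote_code and a task prefix
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)

# Use the same `clustering: ` prefix so new vectors share this dataset's embedding space
vec = model.encode("clustering: An overview of photosynthesis for middle-school students.")
print(vec.shape)  # (768,)
```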
## Dataset Structure
The columns of the dataset are:
- `id`: the document id in fineweb-edu
- `url`: the url of the document in fineweb-edu
- `score`: the score from fineweb-edu
- `dump`: the dump in fineweb-edu
- `chunk_index`: which chunk of the original document this is
- `chunk_text`: the text of the chunk
- `chunk_tokens`: the chunk's token ids from bert-base-uncased
- `chunk_token_count`: the number of tokens in this chunk
- `embedding`: the 768-dimensional nomic-text-v1.5 embedding of the chunk
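A small example of loading the dataset and inspecting these columns; the repo id below is a placeholder for this dataset's Hub id, and streaming keeps the download small:

```python
from datasets import load_dataset

# Streaming avoids downloading the full ~216 GB of parquet files.
# Replace the placeholder repo id with this dataset's Hub id.
ds = load_dataset("your-namespace/fineweb-edu-10BT-nomic-v1.5", split="train", streaming=True)

row = next(iter(ds))
print(row["id"], row["url"], row["score"], row["dump"])
print(row["chunk_index"], row["chunk_token_count"], len(row["embedding"]))  # len(embedding) == 768
```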
## Dataset Creation
### Curation Rationale
The 10BT sample is big enough to warrant a scaled-up process but manageable enough to be done on a small budget. Using on-demand CPUs and GPUs from modal.com, the total cost was ~$60.