---
dataset_info:
  config_name: gpt-4
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: token_length
    dtype: int64
  - name: text_length
    dtype: int64
  splits:
  - name: train
    num_bytes: 324653709
    num_examples: 15000
  download_size: 187754656
  dataset_size: 324653709
configs:
- config_name: gpt-4
  data_files:
  - split: train
    path: gpt-4/train-*
---
# Dataset Card for "wikipedia_token"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
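
## Usage

Per the YAML header above, the dataset has a single `gpt-4` config with a `train` split of 15,000 examples, each carrying `id`, `title`, `text`, `token_length`, and `text_length` fields. A minimal loading sketch with the `datasets` library follows; the full Hub repository id depends on the owning namespace, which is not stated on this card, so a placeholder is used.

```python
from datasets import load_dataset

# "gpt-4" is the config name from the YAML header; replace <namespace>
# with the actual owner of the repository on the Hugging Face Hub.
ds = load_dataset("<namespace>/wikipedia_token", "gpt-4", split="train")

# Each example exposes: id (string), title (string), text (string),
# token_length (int64), and text_length (int64).
example = ds[0]
print(example["title"], example["token_length"], example["text_length"])
```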