---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  splits:
  - name: train
    num_bytes: 49605042096
    num_examples: 48253932
  - name: valid
    num_bytes: 595216112
    num_examples: 579004
  download_size: 24336775144
  dataset_size: 50200258208
---
# Dataset Card for "turkish_corpus_tokenized"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
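
This card's metadata describes a pre-tokenized Turkish corpus: each example is a sequence of `int32` token IDs under the single feature `input_ids`, split into `train` (~48.25M examples, ~49.6 GB) and `valid` (~579K examples, ~0.6 GB), with a download size of roughly 24.3 GB. Below is a minimal usage sketch, assuming the dataset is hosted on the Hugging Face Hub under a repo id like `<user>/turkish_corpus_tokenized` (the exact id is not stated on this card) and that the `datasets` library is installed.

```python
from datasets import load_dataset

# Load the default config; the "train" and "valid" splits come from the
# YAML frontmatter above. The repo id here is hypothetical.
ds = load_dataset("<user>/turkish_corpus_tokenized")

print(ds)                    # DatasetDict with "train" and "valid" splits
print(ds["train"].features)  # {'input_ids': Sequence(Value('int32'))}

# Inspect the first ten token ids of one training example.
print(ds["train"][0]["input_ids"][:10])
```

Note that the examples contain only token IDs; decoding them back to Turkish text requires the tokenizer originally used to produce the corpus, which this card does not specify.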