---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: token_type_ids
    sequence: int8
  - name: attention_mask
    sequence: int8
  - name: labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 144640000
    num_examples: 80000
  - name: validation
    num_bytes: 14464000
    num_examples: 8000
  - name: test
    num_bytes: 1446400
    num_examples: 800
  download_size: 24687721
  dataset_size: 160550400
---
# Dataset Card for "BertTokenizer_THUCNews_10000_to_lm_datasets"
10,000 examples were selected from the seamew/THUCNewsText dataset and tokenized for fine-tuning the IDEA-CCNL/Wenzhong2.0-GPT2-110M-BertTokenizer-chinese model.
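As a quick sanity check (not part of the original card), the split statistics above are internally consistent: each split stores the same fixed number of bytes per example, as expected for fixed-length tokenized sequences, and the split sizes sum to the reported `dataset_size`.

```python
# Sanity-check the split statistics from the dataset_info block above.
# All numbers are copied directly from the card.
splits = {
    "train": (144_640_000, 80_000),
    "validation": (14_464_000, 8_000),
    "test": (1_446_400, 800),
}

# Every split works out to the same bytes-per-example figure.
bytes_per_example = {name: nb // ne for name, (nb, ne) in splits.items()}
print(bytes_per_example)  # each split: 1808 bytes per example

# The split byte counts add up to the card's dataset_size.
total = sum(nb for nb, _ in splits.values())
print(total)  # 160550400
```

The constant 1808 bytes per example suggests every row holds sequences of one fixed length across the four features (int32 + int8 + int8 + int64 per token, plus serialization overhead).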