---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence: int64
  - name: texts
    dtype: string
  - name: input_ids
    sequence: int32
  - name: token_type_ids
    sequence: int8
  - name: attention_mask
    sequence: int8
  - name: labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 11207397
    num_examples: 3048
  - name: validation
    num_bytes: 2956282
    num_examples: 762
  - name: test
    num_bytes: 2602477
    num_examples: 991
  download_size: 2496494
  dataset_size: 16766156
---

# Dataset Card for "Variome_tokenized_split_0404_dev"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
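The schema above indicates a token-classification (NER) dataset that has already been tokenized for a BERT-style model, carrying both the raw `tokens`/`ner_tags` columns and the pre-computed `input_ids`, `token_type_ids`, `attention_mask`, and `labels`. A minimal loading sketch follows; the Hub namespace is a placeholder you must substitute with the actual owner of this dataset.

```python
from datasets import load_dataset

# "your-namespace" is a placeholder; replace it with the namespace that hosts this dataset.
ds = load_dataset("your-namespace/Variome_tokenized_split_0404_dev")

# Split sizes from the metadata above: 3048 train, 762 validation, 991 test examples.
print(len(ds["train"]), len(ds["validation"]), len(ds["test"]))

# Each example exposes both the word-level annotations and the model-ready inputs.
example = ds["train"][0]
print(example["tokens"][:10])      # word-level tokens
print(example["ner_tags"][:10])    # integer NER labels aligned with `tokens`
print(example["input_ids"][:10])   # subword ids produced by the tokenizer
```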