---
dataset_info:
  features:
  - name: chars
    sequence: string
  - name: labels
    sequence:
      class_label:
        names:
          '0': D
          '1': I
          '2': P
          '3': S
  - name: logits
    sequence:
      sequence: float32
      length: 4
  splits:
  - name: train
    num_bytes: 175352249
    num_examples: 46033
  - name: validation
    num_bytes: 9697876
    num_examples: 2557
  - name: test
    num_bytes: 9693002
    num_examples: 2558
  download_size: 144555783
  dataset_size: 194743127
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
language:
- yue
license: mit
task_categories:
- token-classification
---
This Cantonese word-segmentation dataset was generated by running AlienKevin/electra-hongkongese-base-hkcancor-multi over R5dwMg/zh-wiki-yue-long.
See https://github.com/AlienKevin/dips for details.
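Each example pairs a sequence of characters with a label per character (one of `D`, `I`, `P`, `S`, per the `class_label` mapping above) and the model's raw length-4 logit vector for that character. As a minimal sketch, hard labels can be recovered from the stored logits with an argmax; the example row below is hypothetical, not taken from the dataset:

```python
# Label names from the dataset's class_label mapping: index 0 -> D, 1 -> I, 2 -> P, 3 -> S.
LABEL_NAMES = ["D", "I", "P", "S"]

def decode_labels(logits):
    """Map each character's length-4 logit vector to its label name via argmax."""
    return [LABEL_NAMES[max(range(len(row)), key=row.__getitem__)] for row in logits]

# Hypothetical example row: 3 characters, each with a length-4 logit vector.
example = {
    "chars": ["我", "哋", "食"],
    "logits": [
        [0.1, 2.3, -1.0, 0.2],   # argmax -> index 1 -> "I"
        [1.9, 0.0, -0.5, 0.3],   # argmax -> index 0 -> "D"
        [0.2, 0.1, 0.0, 3.1],    # argmax -> index 3 -> "S"
    ],
}

print(decode_labels(example["logits"]))  # ['I', 'D', 'S']
```

The decoded labels should match the `labels` feature wherever the generating model was confident; keeping the logits alongside the hard labels allows downstream users to apply their own thresholding or distillation.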