---
license: apache-2.0
dataset_info:
  features:
    - name: text
      sequence: string
    - name: labels
      sequence:
        class_label:
          names:
            '0': O
            '1': B-PER.NAM
            '2': I-PER.NAM
            '3': E-PER.NAM
            '4': S-PER.NAM
            '5': B-ORG.NAM
            '6': I-ORG.NAM
            '7': E-ORG.NAM
            '8': S-ORG.NAM
            '9': B-LOC.NAM
            '10': I-LOC.NAM
            '11': E-LOC.NAM
            '12': S-LOC.NAM
            '13': B-GPE.NAM
            '14': I-GPE.NAM
            '15': E-GPE.NAM
            '16': S-GPE.NAM
            '17': B-PER.NOM
            '18': I-PER.NOM
            '19': E-PER.NOM
            '20': S-PER.NOM
            '21': B-ORG.NOM
            '22': I-ORG.NOM
            '23': E-ORG.NOM
            '24': S-ORG.NOM
            '25': B-LOC.NOM
            '26': I-LOC.NOM
            '27': E-LOC.NOM
            '28': S-LOC.NOM
            '29': B-GPE.NOM
            '30': I-GPE.NOM
            '31': E-GPE.NOM
            '32': S-GPE.NOM
  splits:
    - name: train
      num_bytes: 1095833
      num_examples: 1350
    - name: validation
      num_bytes: 215953
      num_examples: 270
    - name: test
      num_bytes: 220694
      num_examples: 270
  download_size: 217348
  dataset_size: 1532480
language:
  - zh
tags:
  - social
size_categories:
  - 1K<n<10K
---

## How to load the dataset

```python
from datasets import load_dataset

datasets = load_dataset("minskiter/weibo", save_infos=True)
train, validation, test = datasets["train"], datasets["validation"], datasets["test"]
# Convert a label id to its string name
print(train.features["labels"].feature.int2str(0))
```
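As a minimal follow-up sketch (standard `datasets` API; nothing beyond the features listed above is assumed), the full tag set can be read from the `ClassLabel` feature and used to decode an example:

```python
from datasets import load_dataset

train = load_dataset("minskiter/weibo")["train"]

# Full tag set: 33 BIOES-style labels over PER/ORG/LOC/GPE, each with .NAM/.NOM variants
label_names = train.features["labels"].feature.names
print(len(label_names))  # 33

# Pair each token of the first training example with its human-readable label
example = train[0]
for token, label_id in zip(example["text"], example["labels"]):
    print(token, label_names[label_id])
```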

## Force Update

```python
from datasets import load_dataset

datasets = load_dataset("minskiter/weibo", download_mode="force_redownload")
```

## Changelog

- 21/7/2023 v1.0.2: Fix data format.
- 16/7/2023 v1.0.0: Publish the Weibo data.