Tasks: Token Classification
Modalities: Text
Formats: parquet
Sub-tasks: named-entity-recognition
Languages: Tagalog
Size: 1K - 10K
title: "Dataset builder to HuggingFace Hub"
description: |
This project contains utility scripts for uploading a dataset to HuggingFace
Hub. We want to separate the spaCy dependencies from the loading script, so
we're parsing the spaCy files independently.
The process goes like this: we download the raw corpus from Google Cloud
Storage (GCS), convert the spaCy files into a readable IOB format, and parse
that using our loading script (i.e., `tlunified-ner.py`).
We're also shipping the IOB file so that it's easier to access.
directories: ["assets", "corpus/spacy", "corpus/iob"]
vars:
version: 1.0
assets:
- dest: assets/corpus.tar.gz
description: "Annotated TLUnified corpora in spaCy format with train, dev, and test splits."
url: "https://storage.googleapis.com/ljvmiranda/calamanCy/tl_tlunified_gold/v${vars.version}/corpus.tar.gz"
commands:
- name: "setup-data"
help: "Prepare the Tagalog corpora used for training various spaCy components"
script:
- mkdir -p corpus/spacy
- tar -xzvf assets/corpus.tar.gz -C corpus/spacy
- python -m spacy_to_iob corpus/spacy/ corpus/iob/
outputs:
- corpus/iob/train.iob
- corpus/iob/dev.iob
- corpus/iob/test.iob
- name: "upload-to-hf"
help: "Upload dataset to HuggingFace Hub"
script:
- ls
deps:
- corpus/iob/train.iob
- corpus/iob/dev.iob
- corpus/iob/test.iob
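
With the assets fetched via `python -m spacy project assets`, running `python -m spacy project run setup-data` unpacks the corpus and calls `spacy_to_iob` to turn the `.spacy` binaries into plain-text IOB files. That conversion script is not part of this file; the snippet below is only a minimal sketch of what it might do, assuming one `DocBin` per split and one token plus its IOB tag, tab-separated, per line.

    # A hedged sketch, not the project's actual spacy_to_iob script.
    from pathlib import Path

    import spacy
    from spacy.tokens import DocBin


    def spacy_to_iob(spacy_dir: Path, iob_dir: Path) -> None:
        """Convert each .spacy file (train/dev/test) into a plain-text IOB file."""
        nlp = spacy.blank("tl")  # only the vocab is needed to deserialize the docs
        iob_dir.mkdir(parents=True, exist_ok=True)
        for spacy_file in sorted(spacy_dir.glob("*.spacy")):
            doc_bin = DocBin().from_disk(spacy_file)
            lines = []
            for doc in doc_bin.get_docs(nlp.vocab):
                for token in doc:
                    # token.ent_iob_ is "B", "I", or "O"; append the label when present
                    tag = f"{token.ent_iob_}-{token.ent_type_}" if token.ent_type_ else "O"
                    lines.append(f"{token.text}\t{tag}")
                lines.append("")  # blank line between documents
            (iob_dir / f"{spacy_file.stem}.iob").write_text("\n".join(lines), encoding="utf-8")


    if __name__ == "__main__":
        spacy_to_iob(Path("corpus/spacy"), Path("corpus/iob"))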
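
The description notes that the IOB output is then parsed by the loading script (`tlunified-ner.py`), which is also not included here. Below is a hedged sketch of the example-generation logic such a datasets loading script typically wraps inside `_generate_examples`, assuming the tab-separated layout from the sketch above.

    def generate_examples(filepath: str):
        """Yield (id, {"tokens": [...], "ner_tags": [...]}) pairs from one IOB file."""
        guid = 0
        tokens, ner_tags = [], []
        with open(filepath, encoding="utf-8") as f:
            for line in f:
                line = line.rstrip("\n")
                if not line:  # a blank line ends the current example
                    if tokens:
                        yield guid, {"tokens": tokens, "ner_tags": ner_tags}
                        guid += 1
                        tokens, ner_tags = [], []
                    continue
                token, tag = line.split("\t")
                tokens.append(token)
                ner_tags.append(tag)
        if tokens:  # flush the last example if the file has no trailing blank line
            yield guid, {"tokens": tokens, "ner_tags": ner_tags}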
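Finally, the `upload-to-hf` command's script is just `ls`, so the actual push to the Hub happens elsewhere or is still a stub. As a rough sketch only, assuming the datasets library and a hypothetical repository ID of `ljvmiranda921/tlunified-ner`, the upload step could be as small as the following.

    # Illustrative only; the project's real upload command is not shown in this file.
    # Assumes prior authentication via `huggingface-cli login` or an HF_TOKEN env var.
    from datasets import load_dataset

    # Build all splits locally through the loading script, then push them to the Hub.
    dataset = load_dataset("tlunified-ner.py")
    dataset.push_to_hub("ljvmiranda921/tlunified-ner")  # repo ID is an assumption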