title: "Dataset builder to HuggingFace Hub"
description: |
  This project contains utility scripts for uploading a dataset to HuggingFace
  Hub. We want to separate the spaCy dependencies from the loading script, so
  we parse the spaCy files independently.

  The process goes like this: we download the raw corpus from Google Cloud
  Storage (GCS), convert the spaCy files into a readable IOB format, and parse
  that using our loading script (i.e., `tlunified-ner.py`). We also ship the
  IOB files so that they are easier to access.
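# A typical end-to-end run uses the spaCy projects CLI (assuming spaCy is
# installed in the current environment):
#
#   python -m spacy project assets          # downloads assets/corpus.tar.gz
#   python -m spacy project run setup-data  # unpacks and converts to IOB
#   python -m spacy project run upload-to-hf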
directories: ["assets", "corpus/spacy", "corpus/iob"]
vars:
  version: 1.0
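# ${vars.version} in the asset URL below is interpolated by the spaCy projects
# CLI, so with version 1.0 it resolves to .../tl_tlunified_gold/v1.0/corpus.tar.gz.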
assets:
  - dest: assets/corpus.tar.gz
    description: "Annotated TLUnified corpora in spaCy format with train, dev, and test splits."
    url: "https://storage.googleapis.com/ljvmiranda/calamanCy/tl_tlunified_gold/v${vars.version}/corpus.tar.gz"
commands:
  - name: "setup-data"
    help: "Prepare the Tagalog corpora used for training various spaCy components"
    script:
      - mkdir -p corpus/spacy
      - tar -xzvf assets/corpus.tar.gz -C corpus/spacy
      - python -m spacy_to_iob corpus/spacy/ corpus/iob/
    outputs:
      - corpus/iob/train.iob
      - corpus/iob/dev.iob
      - corpus/iob/test.iob
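  # spacy_to_iob.py emits IOB files, conventionally one token and its tag per
  # line (e.g. "Juan B-PER") with blank lines between sentences; the exact
  # column separator is an assumption, so check spacy_to_iob.py if it matters.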
- name: "upload-to-hf"
help: "Upload dataset to HuggingFace Hub"
script:
- ls
deps:
- corpus/iob/train.iob
- corpus/iob/dev.iob
- corpus/iob/test.iob
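  # The `ls` above is a placeholder and performs no upload. A minimal sketch of
  # a direct upload with huggingface_hub (the repo_id is an assumption inferred
  # from this repository):
  #
  #   from huggingface_hub import upload_file
  #   for split in ("train", "dev", "test"):
  #       upload_file(
  #           path_or_fileobj=f"corpus/iob/{split}.iob",
  #           path_in_repo=f"corpus/iob/{split}.iob",
  #           repo_id="ljvmiranda921/tlunified-ner",  # assumed repo id
  #           repo_type="dataset",
  #       )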