ljvmiranda921 committed
Commit 3281e02 • 1 Parent(s): 3f7dab9

Update dataset card

Files changed (2):
  1. README.md +40 -7
  2. project.yml +20 -9
README.md CHANGED
@@ -1,16 +1,38 @@
+ ---
+ license: gpl-3.0
+ task_categories:
+ - token-classification
+ language:
+ - tl
+ size_categories:
+ - 1K<n<10K
+ pretty_name: TLUnified-NER
+ tags:
+ - low-resource
+ - named-entity-recognition
+ ---
+
  <!-- SPACY PROJECT: AUTO-GENERATED DOCS START (do not remove) -->

  # 🪐 spaCy Project: Dataset builder to HuggingFace Hub

- This project contains utility scripts for uploading a dataset to HuggingFace
- Hub. We want to separate the spaCy dependencies from the loading script, so
- we're parsing the spaCy files independently.
-
- The process goes like this: we download the raw corpus from Google Cloud
- Storage (GCS), convert the spaCy files into a readable IOB format, and parse
- that using our loading script (i.e., `tlunified-ner.py`).
-
- We're also shipping the IOB file so that it's easier to access.
+
+ ## Dataset Description
+
+ This dataset contains the annotated TLUnified corpora from Cruz and Cheng
+ (2021). It consists of a curated sample of around 7,000 documents for the
+ named entity recognition (NER) task. The majority of the corpus consists of
+ news reports in Tagalog, resembling the domain of the original CoNLL 2003.
+ There are three entity types: Person (PER), Organization (ORG), and Location (LOC).
+
+ ## About this repository
+
+ This repository is a [spaCy project](https://spacy.io/usage/projects) for
+ converting the annotated spaCy files into IOB. The process goes like this: we
+ download the raw corpus from Google Cloud Storage (GCS), convert the spaCy
+ files into a readable IOB format, and parse that using our loading script
+ (i.e., `tlunified-ner.py`). We're also shipping the IOB file so that it's
+ easier to access.


  ## 📋 project.yml

@@ -30,6 +52,17 @@ Commands are only re-run if their inputs have changed.
  | `setup-data` | Prepare the Tagalog corpora used for training various spaCy components |
  | `upload-to-hf` | Upload dataset to HuggingFace Hub |

+ ### ⏭ Workflows
+
+ The following workflows are defined by the project. They
+ can be executed using [`spacy project run [name]`](https://spacy.io/api/cli#project-run)
+ and will run the specified commands in order. Commands are only re-run if their
+ inputs have changed.
+
+ | Workflow | Steps |
+ | --- | --- |
+ | `all` | `setup-data` &rarr; `upload-to-hf` |
+
  ### 🗂 Assets

  The following assets are defined by the project. They can
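
For reference, the spaCy-to-IOB conversion step that the updated card describes could look roughly like the sketch below. This is a minimal illustration, not the project's actual `setup-data` script: the file paths, the blank `tl` pipeline, and the tab-separated output format are assumptions.

```python
"""Minimal sketch of converting .spacy (DocBin) files into IOB, as described in the card.

Assumes the corpus has already been downloaded from GCS and extracted into
corpus/spacy/; paths and output format are illustrative only.
"""
from pathlib import Path

import spacy
from spacy.tokens import DocBin

# Tokenizer-only Tagalog pipeline, used solely to deserialize the Docs.
nlp = spacy.blank("tl")


def spacy_to_iob(spacy_path: Path, iob_path: Path) -> None:
    """Write one 'token<TAB>tag' pair per line, with a blank line between documents."""
    doc_bin = DocBin().from_disk(spacy_path)
    with iob_path.open("w", encoding="utf-8") as f:
        for doc in doc_bin.get_docs(nlp.vocab):
            for token in doc:
                # token.ent_iob_ is "B", "I", or "O"; join it with the entity label (PER/ORG/LOC).
                tag = f"{token.ent_iob_}-{token.ent_type_}" if token.ent_iob_ != "O" else "O"
                f.write(f"{token.text}\t{tag}\n")
            f.write("\n")


if __name__ == "__main__":
    for split in ("train", "dev", "test"):
        spacy_to_iob(Path(f"corpus/spacy/{split}.spacy"), Path(f"corpus/iob/{split}.iob"))
```

The resulting `corpus/iob/*.iob` files are what the `upload-to-hf` command lists as `deps` and what the loading script (`tlunified-ner.py`) parses.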
project.yml CHANGED
@@ -1,14 +1,20 @@
- title: "Dataset builder to HuggingFace Hub"
+ title: "TLUnified-NER Corpus"
  description: |
-   This project contains utility scripts for uploading a dataset to HuggingFace
-   Hub. We want to separate the spaCy dependencies from the loading script, so
-   we're parsing the spaCy files independently.
-
-   The process goes like this: we download the raw corpus from Google Cloud
-   Storage (GCS), convert the spaCy files into a readable IOB format, and parse
-   that using our loading script (i.e., `tlunified-ner.py`).
-
-   We're also shipping the IOB file so that it's easier to access.
+
+   This dataset contains the annotated TLUnified corpora from Cruz and Cheng
+   (2021). It consists of a curated sample of around 7,000 documents for the
+   named entity recognition (NER) task. The majority of the corpus consists of
+   news reports in Tagalog, resembling the domain of the original CoNLL 2003.
+   There are three entity types: Person (PER), Organization (ORG), and Location (LOC).
+
+   ### About this repository
+
+   This repository is a [spaCy project](https://spacy.io/usage/projects) for
+   converting the annotated spaCy files into IOB. The process goes like this: we
+   download the raw corpus from Google Cloud Storage (GCS), convert the spaCy
+   files into a readable IOB format, and parse that using our loading script
+   (i.e., `tlunified-ner.py`). We're also shipping the IOB file so that it's
+   easier to access.

  directories: ["assets", "corpus/spacy", "corpus/iob"]

@@ -20,6 +26,11 @@ assets:
    description: "Annotated TLUnified corpora in spaCy format with train, dev, and test splits."
    url: "https://storage.googleapis.com/ljvmiranda/calamanCy/tl_tlunified_gold/v${vars.version}/corpus.tar.gz"

+ workflows:
+   all:
+     - "setup-data"
+     - "upload-to-hf"
+
  commands:
    - name: "setup-data"
      help: "Prepare the Tagalog corpora used for training various spaCy components"

@@ -35,7 +46,7 @@ commands:
    - name: "upload-to-hf"
      help: "Upload dataset to HuggingFace Hub"
      script:
-       - ls
+       - git push
      deps:
        - corpus/iob/train.iob
        - corpus/iob/dev.iob
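
Once `spacy project run all` has regenerated and pushed the IOB files, the dataset would typically be consumed through the `datasets` library. A minimal sketch, assuming a Hub repo ID of `ljvmiranda921/tlunified-ner` (inferred from the committer's username and the card's `pretty_name`, not stated in this diff):

```python
from datasets import load_dataset

# The repo ID below is an assumption; substitute the actual dataset ID, or point
# load_dataset at the local loading script instead: load_dataset("tlunified-ner.py").
ds = load_dataset("ljvmiranda921/tlunified-ner")

print(ds)              # DatasetDict with the train/dev/test splits shipped as IOB
print(ds["train"][0])  # one example: tokens and their PER/ORG/LOC NER tags
```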