---
language:
- nl
dataset_info:
features:
- name: tokens
sequence: string
- name: pos
sequence: string
- name: lemma
sequence: string
- name: mw_id
sequence: string
- name: corpus
dtype: string
splits:
- name: train
num_bytes: 14145953
num_examples: 10812
- name: validation
num_bytes: 2231572
num_examples: 1686
- name: test
num_bytes: 1931275
num_examples: 1639
download_size: 3290324
dataset_size: 18308800
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Galahad training data
Taken from the [galahad-corpus-data repository on GitHub](https://github.com/INL/galahad-corpus-data/tree/1.0.1/training-data), version 1.0.1.
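The data can be loaded with the `datasets` library. A minimal sketch, assuming the standard `load_dataset` interface; the field names follow the metadata above:

```python
from datasets import load_dataset

# Load all three splits (train, validation, test) declared in the card metadata.
dataset = load_dataset("ivdnt/galahad-corpus-data")

example = dataset["train"][0]
print(example["tokens"])  # list of token strings
print(example["pos"])     # one PoS tag per token
print(example["lemma"])   # one lemma per token
print(example["mw_id"])   # multi-word id per token
print(example["corpus"])  # name of the source corpus
```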
## Statistics
The directory [statistics/](https://huggingface.co/datasets/ivdnt/galahad-corpus-data/tree/main/statistics) contains frequency calculations, such as frequency lists of all lemmata and part-of-speech tags. Note that multi-word tokens did not get any special treatment in the data structure, so for both lemma and PoS you may see concatenated labels such as `lemma1+lemma2` as a single label for a given token.
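Similar frequency lists can be recomputed directly from the dataset. A sketch (the files in `statistics/` may have been produced differently); concatenated multi-word labels are counted as single labels here:

```python
from collections import Counter
from datasets import load_dataset

train = load_dataset("ivdnt/galahad-corpus-data", split="train")

lemma_freq, pos_freq = Counter(), Counter()
for example in train:
    # "lemma1+lemma2"-style multi-word labels stay intact, as in the data itself.
    lemma_freq.update(example["lemma"])
    pos_freq.update(example["pos"])

print(lemma_freq.most_common(10))
print(pos_freq.most_common(10))
```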
### Text and token counts
Total: 14,137 texts, 390,534 tokens
<details>
<summary>Texts and tokens per corpus</summary>

- clvn: 857 texts, 27,654 tokens
- couranten: 800 texts, 29,577 tokens
- dbnl-excerpts-15: 138 texts, 9,611 tokens
- dbnl-excerpts-16: 797 texts, 10,002 tokens
- dbnl-excerpts-17: 256 texts, 11,626 tokens
- dbnl-excerpts-18: 212 texts, 9,986 tokens
- dbnl-excerpts-19: 503 texts, 15,301 tokens
- dictionary-quotations-15: 2,231 texts, 41,012 tokens
- dictionary-quotations-16: 1,826 texts, 45,851 tokens
- dictionary-quotations-17: 1,901 texts, 45,836 tokens
- dictionary-quotations-18: 1,756 texts, 46,182 tokens
- dictionary-quotations-19: 1,540 texts, 34,740 tokens
- letters-as-loot: 1,320 texts, 63,156 tokens
</details>
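The per-corpus counts above can be reproduced from the `corpus` and `tokens` columns. A sketch, assuming each example corresponds to one text:

```python
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("ivdnt/galahad-corpus-data")

texts, tokens = Counter(), Counter()
for split in ("train", "validation", "test"):
    for example in dataset[split]:
        texts[example["corpus"]] += 1
        tokens[example["corpus"]] += len(example["tokens"])

for corpus in sorted(texts):
    print(f"{corpus}: {texts[corpus]:,} texts, {tokens[corpus]:,} tokens")
```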