---
configs:
- config_name: wiktionary
data_files:
- split: train
path: "wiktionary/train.csv"
- split: validation
path: "wiktionary/valid.csv"
- config_name: web
data_files:
- split: train
path: "web/train.csv"
- split: validation
path: "web/valid.csv"
license: mit
---
# CompoundPiece
Dataset of compound words for the paper [CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models](https://arxiv.org/abs/2305.14214).
Load the balanced dataset of hyphenated and non-hyphenated words scraped from the web (used as pretraining data):
```python
from datasets import load_dataset

load_dataset("benjamin/compoundpiece", "web")
```
Load the dataset of compound and non-compound words (used for fine-tuning):
```python
from datasets import load_dataset

load_dataset("benjamin/compoundpiece", "wiktionary")
```
# Citation
```
@article{minixhofer2023compoundpiece,
title={CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models},
author={Minixhofer, Benjamin and Pfeiffer, Jonas and Vuli{\'c}, Ivan},
journal={arXiv preprint arXiv:2305.14214},
year={2023}
}
```
# License
MIT