---
configs:
- config_name: wiktionary
  data_files:
  - split: train
    path: "wiktionary/train.csv"
  - split: validation
    path: "wiktionary/valid.csv"
- config_name: web
  data_files:
  - split: train
    path: "web/train.csv"
  - split: validation
    path: "web/valid.csv"
license: mit
---

# CompoundPiece

Dataset of compound words for the paper [CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models](https://arxiv.org/abs/2305.14214).

Load the balanced dataset of hyphenated and non-hyphenated words scraped from the web (used as pretraining data):

```python
from datasets import load_dataset

load_dataset("benjamin/compoundpiece", "web")
```

Load the dataset of compound and non-compound words (used for fine-tuning):

```python
from datasets import load_dataset

load_dataset("benjamin/compoundpiece", "wiktionary")
```

# Citation

```
@article{minixhofer2023compoundpiece,
  title={CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models},
  author={Minixhofer, Benjamin and Pfeiffer, Jonas and Vuli{\'c}, Ivan},
  journal={arXiv preprint arXiv:2305.14214},
  year={2023}
}
```

# License

MIT
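As a sketch of how the YAML `configs` block above works: each `config_name` maps to a set of split CSVs, and passing that name to `load_dataset` selects the corresponding files. The snippet below mimics the card's file layout locally with the standard library only (the real data lives on the Hub, and the `word` column header is a hypothetical placeholder, not the dataset's actual schema):

```python
import csv
import os
import tempfile

# File layout declared in the card's YAML `configs` block.
layout = {
    "wiktionary": {"train": "wiktionary/train.csv", "validation": "wiktionary/valid.csv"},
    "web": {"train": "web/train.csv", "validation": "web/valid.csv"},
}

root = tempfile.mkdtemp()
for config, splits in layout.items():
    for split, relpath in splits.items():
        path = os.path.join(root, relpath)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["word"])  # hypothetical header for illustration
            writer.writerow([f"{config}-{split}-example"])

# Choosing the "web" config resolves to these split files:
web_files = {split: os.path.join(root, p) for split, p in layout["web"].items()}
print(sorted(web_files))  # ['train', 'validation']
```

This is only an illustration of the config-to-file mapping; for actual use, call `load_dataset("benjamin/compoundpiece", ...)` as shown above and the library handles file resolution.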