Tasks: Text2Text Generation
Formats: parquet
Languages: English
Size: 100K<n<1M
Update README.md
README.md CHANGED
@@ -51,7 +51,7 @@ size_categories:
 
 Preprocessed version of [WikiSplit](https://arxiv.org/abs/1808.09468).
 
-Since the [original WikiSplit](https://huggingface.co/datasets/wiki_split) was tokenized and had some noises, we have used the [Moses detokenizer](https://github.com/moses-smt/mosesdecoder/blob/c41ff18111f58907f9259165e95e657605f4c457/scripts/tokenizer/detokenizer.perl) for detokenization and removed text fragments.
+Since the [original WikiSplit dataset](https://huggingface.co/datasets/wiki_split) was tokenized and had some noises, we have used the [Moses detokenizer](https://github.com/moses-smt/mosesdecoder/blob/c41ff18111f58907f9259165e95e657605f4c457/scripts/tokenizer/detokenizer.perl) for detokenization and removed text fragments.
 
 For detailed information on the preprocessing steps, please see [here](https://github.com/nttcslab-nlp/wikisplit-pp/src/datasets/common.py).
 
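The detokenization step mentioned in the changed line can be sketched in a few lines of Python. The snippet below is a minimal illustration using the `sacremoses` port of the Moses detokenizer rather than the linked `detokenizer.perl` script; the example sentence and the `looks_like_fragment` filter are illustrative assumptions, not the exact rules applied in `common.py`.

```python
# Minimal sketch, assuming the sacremoses package (`pip install sacremoses`)
# approximates the Moses detokenizer.perl script linked above.
from sacremoses import MosesDetokenizer

detok = MosesDetokenizer(lang="en")

def detokenize(tokenized_sentence: str) -> str:
    # WikiSplit text is whitespace-tokenized, e.g. "It 's a test ."
    return detok.detokenize(tokenized_sentence.split())

def looks_like_fragment(sentence: str) -> bool:
    # Hypothetical filter for text fragments; the actual criteria are
    # defined in the repository's src/datasets/common.py.
    return len(sentence.split()) < 3 or not sentence.endswith((".", "!", "?"))

example = "The city is located in the north , and it 's the capital ."
clean = detokenize(example)
if not looks_like_fragment(clean):
    # Roughly: "The city is located in the north, and it's the capital."
    print(clean)
```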