hpprc committed on
Commit ed13a1f
1 parent: 6cf544f

Update README.md

Files changed (1)
  1. README.md +10 -1
README.md CHANGED
@@ -46,4 +46,13 @@ language:
  pretty_name: WikiSplit++
  size_categories:
  - 10M<n<100M
- ---
+ ---
+
+ Preprocessed version of [WikiSplit](https://arxiv.org/abs/1808.09468).
+
+ Since the [original WikiSplit](https://huggingface.co/datasets/wiki_split) was tokenized and contained some noise, we used the [Moses detokenizer](https://github.com/moses-smt/mosesdecoder/blob/c41ff18111f58907f9259165e95e657605f4c457/scripts/tokenizer/detokenizer.perl) for detokenization and removed text fragments.
+
+ For detailed information on the preprocessing steps, please see [here](https://github.com/nttcslab-nlp/wikisplit-pp/src/datasets/common.py).
+
+ This preprocessed dataset serves as the basis for [WikiSplit++](https://huggingface.co/datasets/cl-nagoya/wikisplit-pp).
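To make the detokenization step concrete, here is a minimal stdlib-only sketch of what detokenizing means: rejoining whitespace-separated tokens and reattaching punctuation and contractions. The dataset itself was processed with the Moses detokenizer linked above, which handles many more cases (quotes, language-specific rules); this regex version is only a simplified illustration, not the actual tool.

```python
import re

def detokenize(tokens):
    """Naive illustration of detokenization (NOT the Moses detokenizer):
    join tokens, then remove the spaces that tokenization inserted
    around punctuation and English contractions."""
    text = " ".join(tokens)
    # no space before closing punctuation: "noise ." -> "noise."
    text = re.sub(r" ([.,!?;:%)\]])", r"\1", text)
    # no space after opening brackets: "( e.g." -> "(e.g."
    text = re.sub(r"([(\[]) ", r"\1", text)
    # reattach English contractions: "It 's" -> "It's", "do n't" -> "don't"
    text = re.sub(r" n't\b", "n't", text)
    text = re.sub(r" '(s|re|ve|ll|m|d)\b", r"'\1", text)
    return text

print(detokenize("It 's tokenized , and had some noise .".split()))
# It's tokenized, and had some noise.
```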