Update README.md
README.md
CHANGED
@@ -52,6 +52,14 @@ This dataset is the HuggingFace version of WikiSplit++.
 WikiSplit++ enhances the original WikiSplit by applying two techniques: filtering through NLI classification and sentence-order reversing, which help to remove noise and reduce hallucinations compared to the original WikiSplit.
 The preprocessed WikiSplit dataset that formed the basis for this can be found [here](https://huggingface.co/datasets/cl-nagoya/wikisplit).
 
+
+## Dataset Description
+
+- **Repository:** https://github.com/nttcslab-nlp/wikisplit-pp
+- **Paper:** https://arxiv.org/abs/2404.09002
+- **Point of Contact:** [Hayato Tsukagoshi](mailto:tsukagoshi.hayato.r2@s.mail.nagoya-u.ac.jp)
+
+
 ## Usage
 
 ```python
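# NOTE: the README's actual Usage snippet is cut off in this diff view.
# The lines below are only a minimal sketch of loading the dataset from the
# Hugging Face Hub; the dataset ID "cl-nagoya/wikisplit-pp" and the
# split/field access are assumptions for illustration, not taken from the README.
from datasets import load_dataset

dataset = load_dataset("cl-nagoya/wikisplit-pp")

# Inspect the available splits and one example record.
print(dataset)
print(dataset["train"][0])
```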