Update README.md
README.md CHANGED
@@ -30,7 +30,35 @@ dataset_info:
     num_examples: 22948
   download_size: 50953604
   dataset_size: 73937561
+license: cc-by-sa-4.0
+task_categories:
+- text2text-generation
+language:
+- en
+- ja
+pretty_name: SimplifyingMT
 ---
-
+## SimplifyingMT
 
-
+## Dataset Description
+- Repository: [https://github.com/nttcslab-nlp/SimplifyingMT_ACL24](https://github.com/nttcslab-nlp/SimplifyingMT_ACL24)
+- Paper: to appear
+
+## Paper
+
+Oshika et al., Simplifying Translations for Children: Iterative Simplification Considering Age of Acquisition with LLMs, Findings of ACL 2024
+
+## Abstract
+
+In recent years, neural machine translation (NMT) has been widely used in everyday life.
+However, the current NMT lacks a mechanism to adjust the difficulty level of translations to match the user's language level.
+Additionally, due to the bias in the training data for NMT, translations of simple source sentences are often produced with complex words.
+In particular, this could pose a problem for children, who may not be able to understand the meaning of the translations correctly.
+In this study, we propose a method that replaces words with a high Age of Acquisition (AoA) in translations with simpler words to match the translations to the user's level.
+We achieve this by using large language models (LLMs), providing a triple of a source sentence, a translation, and a target word to be replaced.
+We create a benchmark dataset using back-translation on Simple English Wikipedia.
+The experimental results obtained from the dataset show that our method effectively replaces high-AoA words with lower-AoA words and, moreover, can iteratively replace most of the high-AoA words while still maintaining high BLEU and COMET scores.
+
+## License
+Simple English Wikipedia is distributed under the CC-BY-SA 4.0 license.
+This dataset follows suit and is distributed under the CC-BY-SA 4.0 license.
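
The metadata above reports a single corpus of 22,948 examples. As a minimal usage sketch, the dataset can be loaded with the `datasets` library; note that the repository id and the field names printed below are assumptions for illustration, not confirmed by the diff, so check the `dataset_info` features block for the actual schema.

```python
from datasets import load_dataset

# Hypothetical repository id; replace with the dataset's actual Hub id.
ds = load_dataset("nttcslab-nlp/SimplifyingMT")

print(ds)              # available splits and example counts (22,948 reported above)
print(ds["train"][0])  # inspect the fields of a single example
```
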
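The abstract describes the method only at a high level: given a triple of source sentence, translation, and target word, an LLM rewrites the translation, and this is repeated until no high-AoA words remain. The sketch below illustrates that iterative loop under stated assumptions; the `aoa` table, the `llm_replace` callable, the AoA threshold, and the iteration cap are all hypothetical placeholders, not the authors' implementation.

```python
from typing import Callable, Dict

def simplify_iteratively(
    source: str,
    translation: str,
    aoa: Dict[str, float],                          # hypothetical word -> AoA lookup table
    llm_replace: Callable[[str, str, str], str],    # (source, translation, target word) -> new translation
    aoa_threshold: float = 10.0,                    # assumed cutoff; the paper's setting may differ
    max_iters: int = 10,                            # assumed safety cap on iterations
) -> str:
    """Repeatedly ask an LLM to replace the highest-AoA word until none exceeds the threshold."""
    for _ in range(max_iters):
        # Collect words in the current translation whose AoA exceeds the threshold.
        hard_words = [w for w in translation.split() if aoa.get(w.lower(), 0.0) > aoa_threshold]
        if not hard_words:
            break
        # Target the hardest word first and hand the LLM the full triple.
        target = max(hard_words, key=lambda w: aoa[w.lower()])
        translation = llm_replace(source, translation, target)
    return translation
```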
|