# ParaDocs

Data availability limits the scope of any given task. 
In machine translation, earlier models could not handle long contexts, so the lack of document-level datasets was less noticeable. 
Now, despite the emergence of long-sequence methods, the field remains in a sentence-level paradigm, without the data needed to adequately approach context-aware machine translation. 
Most large-scale datasets have been processed through a pipeline that discards document-level metadata. 

[ParaDocs](https://aclanthology.org/2024.findings-acl.589/) is a publicly available dataset that provides parallel document-level annotations for three large publicly available corpora (ParaCrawl, Europarl, and News Commentary) in many languages.
Using this data and the following scripts, you can download parallel document contexts for the purpose of training context-aware machine translation systems.

If you have questions about this data or use of the following scripts, please do not hesitate to contact the maintainer at rewicks@jhu.edu.

# Quick Start

The scripts to download and process the data can be found [here](https://github.com/rewicks/ParaDocs/):

Clone these scripts:

```
git clone https://github.com/rewicks/ParaDocs.git
```

From this directory, you can stream a specific language pair and split from Hugging Face with:

```
paradocs/paradocs-hf --name en-de-strict --minimum_size 2 --frequency_cutoff 100 --lid_cutoff 0.5
```

It may alternatively be faster to download the `*.gz` files of your desired split and then pipe them through the `paradocs/paradocs` script for filtering:

```
zcat data/en-de/strict* | paradocs/paradocs --minimum_size 2 --frequency_cutoff 100 --lid_cutoff 0.5
```

The filtering command-line arguments are explained in more detail in Section 3.2 of the paper.
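To make the three flags concrete, here is a minimal Python sketch of the kind of document-level filtering they imply. The record layout (a document as a list of `(src, tgt, lid_score, line_frequency)` tuples) and the exact comparison semantics are assumptions for illustration, not the actual ParaDocs implementation; see the scripts and Section 3.2 for the real behavior.

```python
def keep_document(doc, minimum_size=2, frequency_cutoff=100, lid_cutoff=0.5):
    """Return True if a parallel document passes all three filters.

    Assumed record layout (hypothetical): doc is a list of
    (src, tgt, lid_score, line_frequency) tuples.
    """
    if len(doc) < minimum_size:           # --minimum_size: too few sentence pairs
        return False
    for src, tgt, lid_score, freq in doc:
        if lid_score < lid_cutoff:        # --lid_cutoff: low LID confidence
            return False
        if freq > frequency_cutoff:       # --frequency_cutoff: likely boilerplate line
            return False
    return True

# Toy data: one clean document, one with a low LID score, one boilerplate-heavy.
docs = [
    [("Hello.", "Hallo.", 0.95, 3), ("How are you?", "Wie geht's?", 0.91, 5)],
    [("OK", "OK", 0.40, 2), ("Bye", "Tschüss", 0.88, 1)],
    [("Click here", "Hier klicken", 0.99, 5000)],
]
kept = [d for d in docs if keep_document(d)]
print(len(kept))  # 1
```

With the default values shown above, only the first document survives: the second fails the LID cutoff and the third is both too short and contains a very frequent (boilerplate-like) line.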

## The Paper

If you use this dataset in your research, please cite our [paper](https://aclanthology.org/2024.findings-acl.589/).


---
license: apache-2.0
---