---
dataset_info:
- config_name: eng-ces.both_contexts
  features:
  - name: eng
    dtype: string
  - name: eng_context
    dtype: string
  - name: ces_context
    dtype: string
  - name: ces
    dtype: string
  splits:
  - name: train
    num_bytes: 99249281542
    num_examples: 16312023
  download_size: 50311612769
  dataset_size: 99249281542
- config_name: eng-ces.src_contexts
  features:
  - name: eng
    dtype: string
  - name: eng_context
    dtype: string
  - name: ces
    dtype: string
  splits:
  - name: train
    num_bytes: 55783391633
    num_examples: 18718104
  download_size: 27949833416
  dataset_size: 55783391633
- config_name: eng-ces.trg_contexts
  features:
  - name: eng
    dtype: string
  - name: ces_context
    dtype: string
  - name: ces
    dtype: string
  splits:
  - name: train
    num_bytes: 67790203254
    num_examples: 21000099
  download_size: 35682681930
  dataset_size: 67790203254
- config_name: eng-deu.both_contexts
  features:
  - name: eng
    dtype: string
  - name: eng_context
    dtype: string
  - name: deu_context
    dtype: string
  - name: deu
    dtype: string
  splits:
  - name: train
    num_bytes: 544626482766
    num_examples: 92066559
  download_size: 287393903524
  dataset_size: 544626482766
- config_name: eng-deu.src_contexts
  features:
  - name: eng
    dtype: string
  - name: eng_context
    dtype: string
  - name: deu
    dtype: string
  splits:
  - name: train
    num_bytes: 305555617347
    num_examples: 105641972
  download_size: 163549986986
  dataset_size: 305555617347
- config_name: eng-deu.trg_contexts
  features:
  - name: eng
    dtype: string
  - name: deu_context
    dtype: string
  - name: deu
    dtype: string
  splits:
  - name: train
    num_bytes: 355001902675
    num_examples: 110317948
  download_size: 189296787255
  dataset_size: 355001902675
- config_name: eng-fra.both_contexts
  features:
  - name: eng
    dtype: string
  - name: eng_context
    dtype: string
  - name: fra_context
    dtype: string
  - name: fra
    dtype: string
  splits:
  - name: train
    num_bytes: 426893899212
    num_examples: 72236079
  download_size: 230871109132
  dataset_size: 426893899212
- config_name: eng-fra.src_contexts
  features:
  - name: eng
    dtype: string
  - name: eng_context
    dtype: string
  - name: fra
    dtype: string
  splits:
  - name: train
    num_bytes: 249628324881
    num_examples: 83450135
  download_size: 137168157896
  dataset_size: 249628324881
- config_name: eng-fra.trg_contexts
  features:
  - name: eng
    dtype: string
  - name: fra_context
    dtype: string
  - name: fra
    dtype: string
  splits:
  - name: train
    num_bytes: 270469945796
    num_examples: 86300028
  download_size: 146946754213
  dataset_size: 270469945796
- config_name: eng-pol.both_contexts
  features:
  - name: eng
    dtype: string
  - name: eng_context
    dtype: string
  - name: pol_context
    dtype: string
  - name: pol
    dtype: string
  splits:
  - name: train
    num_bytes: 89716407201
    num_examples: 14889498
  download_size: 46321869504
  dataset_size: 89716407201
- config_name: eng-pol.src_contexts
  features:
  - name: eng
    dtype: string
  - name: eng_context
    dtype: string
  - name: pol
    dtype: string
  splits:
  - name: train
    num_bytes: 49301775564
    num_examples: 16803950
  download_size: 25270022217
  dataset_size: 49301775564
- config_name: eng-pol.trg_contexts
  features:
  - name: eng
    dtype: string
  - name: pol_context
    dtype: string
  - name: pol
    dtype: string
  splits:
  - name: train
    num_bytes: 59562532908
    num_examples: 18395174
  download_size: 31681850576
  dataset_size: 59562532908
- config_name: eng-rus.both_contexts
  features:
  - name: eng
    dtype: string
  - name: eng_context
    dtype: string
  - name: rus_context
    dtype: string
  - name: rus
    dtype: string
  splits:
  - name: train
    num_bytes: 18867292434
    num_examples: 2433874
  download_size: 9061303586
  dataset_size: 18867292434
- config_name: eng-rus.src_contexts
  features:
  - name: eng
    dtype: string
  - name: eng_context
    dtype: string
  - name: rus
    dtype: string
  splits:
  - name: train
    num_bytes: 9242442932
    num_examples: 3104195
  download_size: 4903481579
  dataset_size: 9242442932
- config_name: eng-rus.trg_contexts
  features:
  - name: eng
    dtype: string
  - name: rus_context
    dtype: string
  - name: rus
    dtype: string
  splits:
  - name: train
    num_bytes: 14244166125
    num_examples: 2813181
  download_size: 6539469805
  dataset_size: 14244166125
configs:
- config_name: eng-ces.both_contexts
  data_files:
  - split: train
    path: both_contexts/eng-ces/train-*
- config_name: eng-ces.src_contexts
  data_files:
  - split: train
    path: src_contexts/eng-ces/train-*
- config_name: eng-ces.trg_contexts
  data_files:
  - split: train
    path: trg_contexts/eng-ces/train-*
- config_name: eng-deu.both_contexts
  data_files:
  - split: train
    path: both_contexts/eng-deu/train-*
  default: true
- config_name: eng-deu.src_contexts
  data_files:
  - split: train
    path: src_contexts/eng-deu/train-*
- config_name: eng-deu.trg_contexts
  data_files:
  - split: train
    path: trg_contexts/eng-deu/train-*
- config_name: eng-fra.both_contexts
  data_files:
  - split: train
    path: both_contexts/eng-fra/train-*
- config_name: eng-fra.src_contexts
  data_files:
  - split: train
    path: src_contexts/eng-fra/train-*
- config_name: eng-fra.trg_contexts
  data_files:
  - split: train
    path: trg_contexts/eng-fra/train-*
- config_name: eng-pol.both_contexts
  data_files:
  - split: train
    path: both_contexts/eng-pol/train-*
- config_name: eng-pol.src_contexts
  data_files:
  - split: train
    path: src_contexts/eng-pol/train-*
- config_name: eng-pol.trg_contexts
  data_files:
  - split: train
    path: trg_contexts/eng-pol/train-*
- config_name: eng-rus.both_contexts
  data_files:
  - split: train
    path: both_contexts/eng-rus/train-*
- config_name: eng-rus.src_contexts
  data_files:
  - split: train
    path: src_contexts/eng-rus/train-*
- config_name: eng-rus.trg_contexts
  data_files:
  - split: train
    path: trg_contexts/eng-rus/train-*
task_categories:
- translation
language:
- en
- de
- fr
- cs
- pl
- ru
size_categories:
- 100M<n<1B
---

# Dataset Card for ParaCrawl-Context

This is a dataset for document-level machine translation introduced in the ACL 2024 paper [**Document-Level Machine Translation with Large-Scale Public Parallel Corpora**](https://aclanthology.org/2024.acl-long.712/).
It consists of parallel sentence pairs from the [ParaCrawl](https://paracrawl.eu/) dataset, along with corresponding preceding context extracted from the webpages the sentences were crawled from.

## Dataset Details

### Dataset Description

This dataset adds document-level context to parallel corpora released by [ParaCrawl](https://paracrawl.eu/). This is useful for training document-level (context-aware) machine translation models, for which very few large-scale public datasets exist. While the ParaCrawl project released large-scale parallel corpora at the sentence level, it did not preserve the document context of the webpages the sentences were originally extracted from. We used additional data sources to retrieve the contexts from the original web text, and thus created datasets that can be used to train document-level MT models.

- **Curated by:** Proyag Pal, Alexandra Birch, Kenneth Heafield, from data released by ParaCrawl
- **Language pairs:** eng-deu, eng-fra, eng-ces, eng-pol, eng-rus
- **License:** Creative Commons Zero v1.0 Universal (CC0)
- **Repository:** https://github.com/Proyag/ParaCrawl-Context
- **Paper:** https://aclanthology.org/2024.acl-long.712/

## Uses

This dataset is intended for document-level (context-aware) machine translation.

### Direct Use

The ideal usage of this dataset is to use the sentence fields as the source and target translations, and to provide the contexts as additional information to a model. This could be done, for example, with a dual-encoder model, where one encoder encodes the source sentence while the second encoder encodes the source/target context. For an example, see our associated [paper](https://aclanthology.org/2024.acl-long.712/).

### Out-of-Scope Use

We expect that this dataset will not work very well for the document-level translation scenario where an entire concatenated document is provided as input and a full translation is produced by the model.
This is because of how the data was extracted - by matching sentences to their originating URLs and extracting the preceding context from those pages - which means:

* There is no guarantee that the preceding context automatically extracted from the originating URL is related to the sentence pair at all.
* Many sentences came from multiple URLs and thus have multiple contexts, so source and target contexts concatenated with source and target sentences may not produce parallel "documents" at all in many cases. However, most examples in our datasets have a unique context, so concatenation might work better if only those examples are used. We have not validated this experimentally, and you are encouraged to try and let us know if it works!

## Dataset Structure

There are three versions of the dataset for each language pair. For a language pair SRC-TRG, they are:

- `SRC-TRG.src_contexts` - which has preceding context for only the SRC side
- `SRC-TRG.trg_contexts` - which has preceding context for only the TRG side
- `SRC-TRG.both_contexts` - which has preceding context for both the SRC and TRG sides

### Data Instances

Example from `eng-deu.both_contexts`:

```yaml
{'eng': 'This stage is 32.8 km long and can be accomplished in 8 hours and 30 minutes.',
 'eng_context': "Cars Glungezer chair lift Patscherkofel cable cars Service Classifications of Hiking Routes Safety in the Mountains Mountain huts and alpine restaurants Guides Sport Shops Brochures and Maps Hiking Free hiking programme Hiking Hikes Long-distance walking trails Summit Tours Family hikes Education and nature trails Nature reserves Geocaching Lifts & cable cars Axamer Lizum Innsbruck Nordkette cable cars Drei-Seen-Bahn in Kühtai Muttereralm Oberperfuss Cable Cars Glungezer chair lift Patscherkofel cable cars Service Classifications of Hiking Routes Safety in the Mountains Mountain huts and alpine restaurants Guides Sport Shops Brochures and Maps today 12°C/54°F 70% Fineweather 2500mm Frostborder Tuesday 17°C/63°F 50% Fineweather 
3100mm Frostborder Wednesday 18°C/64°F 40% Fineweather 3400mm Frostborder Forecast We will see a nice start to the day with sunshine. Clouds will however gradually increase at all levels producing showers in the afternoon. Tendency Air pressure will rise over Central Europe and there will be some clearer spells at times. A period of fine weather is not forecast, however. Until Thursday, sunny spells will alternate with showers in the afternoon. Need help? Contact us! Innsbruck Tourism +43 512 / 59 850 office@innsbruck.info Mon - Fri: 8.00 am - 5.00 pm Hotel- and group reservations +43 512 / 56 2000 incoming@innsbruck.info Mon - Fri: 9.00 am - 5.00 pm Tourist info +43 512 / 53 56-0 info@innsbruck.info Mon - Sat: 9.00 am - 5.00 pm DE EN IT FR NL ES Hikes innsbruck.info Hiking Hiking Hikes Hike with the family, as a couple or alone, short or long, to the summit or on the flat. Search out the correct route for you around Innsbruck. The filter below is here to help. Choose the length of walk, the difficulty level, duration and much more. The results will then deliver tailor-made hiking tips for your holiday. The Tyrolean section of The Way of St. James through Innsbruck https://www.innsbruck.info/fileadmin/userdaten/contwise/poi-28003079-jakobsweg_sterbach_in_muehlau_42027886.jpg Back Overview Difficulty easy Altitude up 900 METER Max. route length 81.4 KM Best season April - October Information/food GPX Download Route to start Three of the sections along the main route of The Way of St. James pass through the Innsbruck holiday region. From Terfens to Innsbruck: This stage is 24.2 kilometres long and is possible in 6 hours and 15 minutes. The Way of St. James leads from the medieval town of Hall in Tirol via the villages of Absam and Thaur, through the market town of Rum and on to the city of Innsbruck. Once in Innsbruck, the route continues to St. James' Cathedral. 
From Innsbruck to Pfaffenhofen: ", 'deu_context': 'mit Kindern Webcams Prospekte Aktuelle Top-Themen auf Innsbruck.info Welcome Card Innsbruck Card Bräuche im Sommer Walks to explore Innsbruck Webcams Hiking Bergwanderprogramm Wandern Wanderungen Weitwanderungen Gipfeltouren Familienwanderungen Themen- und Naturlehrpfade Naturschauplätze Geocaching Bergbahnen und Lifte Axamer Lizum Innsbrucker Nordkettenbahnen Dreiseenbahn Kühtai Muttereralm Bergbahn Oberperfuss Glungezerbahn Patscherkofelbahn Service Klassifizierung der Wanderwege Sicherheit am Berg Almhütten und Bergrestaurants Bergführer und Guides Sportshops Prospekte und Karten Hiking Bergwanderprogramm Wandern Wanderungen Weitwanderungen Gipfeltouren Familienwanderungen Themen- und Naturlehrpfade Naturschauplätze Geocaching Bergbahnen und Lifte Axamer Lizum Innsbrucker Nordkettenbahnen Dreiseenbahn Kühtai Muttereralm Bergbahn Oberperfuss Glungezerbahn Patscherkofelbahn Service Klassifizierung der Wanderwege Sicherheit am Berg Almhütten und Bergrestaurants Bergführer und Guides Sportshops Prospekte und Karten Heute 18°C 30% Sonne 3610mm Frostgrenze Dienstag 17°C 50% Sonne 3100mm Frostgrenze Mittwoch 18°C 40% Sonne 3400mm Frostgrenze Vorhersage Der Tag beginnt zunächst noch recht beschaulich und die Sonne scheint. Allerdings nimmt die Bewölkung nach und nach in allen Schichten zu und am Nachmittag kommt es dann zu Schauern. Tendenz Über Mitteleuropa steigt in der Folge der Luftdruck und zeitweise lockert es auf. Dauerhaftes Schönwetter stellt sich jedoch noch nicht ein: Bis zum Donnerstag gibt es neben Sonne vor allem jeweils nachmittags auch Schauer. Können wir helfen? Kontaktieren Sie uns! Innsbruck Tourismus +43 512 / 59 850 office@innsbruck.info Mo - Fr: 8:00 - 17:00 Uhr Hotel- u. 
Gruppenreservierung +43 512 / 56 2000 incoming@innsbruck.info Mo - Fr: 9:00 - 17:00 Uhr Tourismus Information +43 512 / 53 56-0 info@innsbruck.info Mo - Sa: 9:00 - 17:00 Uhr DE EN IT FR NL ES Wanderungen innsbruck.info Wandern Wandern Wanderungen Wandern mit Familie, zu zweit oder solo, weit oder kurz, zum Gipfelkreuz oder entspannt ohne viel Steigung. Suchen Sie sich die passende Wanderung rund um Innsbruck aus. Die Filter oberhalb der Ergebnisliste helfen dabei: Wählen Sie Streckenlänge, Schwierigkeitsgrad, Gehzeit und einiges mehr. Die Ergebnisse darunter liefern maßgeschneiderte Wandertipps für Ihren Urlaub. Tiroler Jakobsweg durch Innsbruck https://www.innsbruck.info/fileadmin/userdaten/contwise/poi-28003079-jakobsweg_sterbach_in_muehlau_42027886.jpg Zurück Zur Übersicht Schwierigkeit leicht Höhenmeter bergauf 900 METER Streckenlänge 81.4 KM Beste Jahreszeit April bis Oktober Mit Einkehrmöglichkeit GPX Download Route zum Startpunkt Drei Abschnitte der Hauptroute des Jakobswegs verlaufen durch die Ferienregion Innsbruck. Von Terfens nach Innsbruck: In 6 Stunden 15 Minuten sind die 24,2 Kilometer dieses Abschnittes zu schaffen. Von der mittelalterlichen Stadt Hall über Absam und Thaur führt der Jakobsweg durch die Marktgemeinde Rum und weiter nach Innsbruck. Dort angelangt kommt man zum Dom St.Jakob. Von Innsbruck bis Pfaffenhofen: ",
 'deu': 'Der Abschnitt ist 32,8 Kilometer lang und in einer Zeit von 8 Stunden und 30 Minuten zu schaffen.'}
```

`eng-deu.src_contexts` will have the `eng`, `eng_context`, and `deu` fields, while `eng-deu.trg_contexts` will have the `eng`, `deu_context`, and `deu` fields. This example has only one context on each side, but examples may have multiple alternative contexts separated by `|||` delimiters.

### Data Fields

For `SRC-TRG.src_contexts` or `SRC-TRG.trg_contexts`, there are 3 fields:

- `SRC` - containing the source (English) sentence.
- `TRG` - containing the target language sentence.
- `SRC_context` or `TRG_context` - containing the source/target context(s). There may be multiple contexts from multiple webpages, separated by the delimiter `|||`. Within each context, line breaks have been replaced with a `` token.

`SRC-TRG.both_contexts` contains 4 fields, since it has both the `SRC_context` and `TRG_context` fields.

Remember to replace `SRC` and `TRG` in these examples with the actual language codes in each case. `SRC` is always `eng`, while `TRG` can be `deu`, `fra`, `ces`, `pol`, or `rus`.

### Data Splits

This dataset does not contain any validation or test sets; all the provided data is intended to be used for training. If you need document-level validation/test sets while training models with this data, it should be quite simple to construct them in the same format from other readily available test sets with document information, such as the [WMT](https://www2.statmt.org/wmt24/translation-task.html) test sets.

## Dataset Creation

### Curation Rationale

While document-level machine translation has inherent advantages over sentence-level approaches, very few large-scale document-level parallel corpora are publicly available. Parallel corpora constructed from web crawls often discard document context in the process of extracting sentence pairs. ParaCrawl released sentence-level parallel corpora with their source URLs, and separately also released raw web text, so we were able to match the URLs to recover the context that the sentences originally occurred in. This enables us to create large-scale parallel corpora for training document-level machine translation models.

### Source Data

This dataset was extracted entirely from [parallel corpora](https://paracrawl.eu/) and [raw web text](https://paracrawl.eu/moredata) released by ParaCrawl. Please refer to the [ParaCrawl paper](https://aclanthology.org/2020.acl-main.417/) for more information about the source of the data.
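As a practical note on the field layout described under Data Fields above: a single context field can pack several alternative contexts into one string, joined by `|||`. Splitting them is a one-liner; this is our own illustrative helper (`split_contexts` is not part of any released tooling):

```python
def split_contexts(field: str) -> list[str]:
    """Split a context field into its alternative contexts.

    A single field may hold up to 1000 contexts from different
    webpages, joined by the '|||' delimiter.
    """
    return [c.strip() for c in field.split("|||") if c.strip()]


# A field holding two alternative contexts:
contexts = split_contexts("First page context ||| Second page context")
# -> ['First page context', 'Second page context']
```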
#### Data Collection and Processing

To extract the contexts for ParaCrawl sentence pairs, we used the following method (copied from the [paper](https://aclanthology.org/2024.acl-long.712/)):

1. Extract the source URLs and corresponding sentences from the TMX files from [ParaCrawl release 9](https://paracrawl.eu/releases) (or the bonus release in the case of eng-rus). Each sentence is usually associated with many different source URLs, and we keep all of them.
2. Match the extracted URLs with the URLs from all the raw text data and get the corresponding base64-encoded webpage/document, if available.
3. Decode the base64 documents and try to match the original sentence. If the sentence is not found in the document, discard the document. Otherwise, keep the 512 tokens preceding the sentence (where a token is anything separated by a space), replace line breaks with a special `` token, and store it as the document context. Since some very common sentences correspond to huge numbers of source URLs, we keep a maximum of 1000 unique contexts per sentence, separated by a `|||` delimiter in the final dataset.
4. Finally, we compile three different files per language pair - a dataset with all sentence pairs where we have one or more source contexts (`*.src_contexts`), one with all sentence pairs with target contexts (`*.trg_contexts`), and a third dataset with both contexts (`*.both_contexts`).

#### Who are the source data producers?

See the [ParaCrawl paper](https://aclanthology.org/2020.acl-main.417/).

#### Personal and Sensitive Information

This dataset is constructed from web-crawled data, and thus may contain sensitive or harmful content. The ParaCrawl datasets were released after some filtering at the sentence-pair level, but please note that the contexts we extracted from the original webpages have not been filtered in any way.
## Bias, Risks, and Limitations

\[This section has been copied from the [paper](https://aclanthology.org/2024.acl-long.712/), which you can refer to for details.\]

**Relevance of context**: Our work assumes that any extracted text preceding a given sentence on a webpage is relevant “document context” for that sentence. However, in many cases the extracted context is likely unrelated to the sentence, since most webpages are not formatted as a coherent “document”. As a result, the dataset often includes irrelevant context, such as lists of products, UI elements, or video titles extracted from webpages, which will not be directly helpful to document-level translation models.

**Unaligned contexts**: For sentences with multiple matching contexts, the source and target contexts may not always be aligned. However, the vast majority of sentence pairs have exactly one source/target context, and should therefore have aligned contexts. We recommend filtering on this basis if aligned contexts are required.

**Language coverage**: ParaCrawl focused on European Union languages, with only a few “bonus” releases for other languages, and most of the corpora were for English-centric language pairs. Due to the high computational requirements of extracting these corpora, our work further chose only a subset of these languages, resulting in corpora for only a few European languages, some of them closely related. Given the availability of raw data and tools to extract such corpora for many more languages from all over the world, we hope the community is encouraged to build such resources for a much larger variety of language pairs.

**Harmful content**: The main released corpora from ParaCrawl were filtered to remove sensitive content, particularly pornography. Since pornographic websites typically contain large amounts of machine-translated text, this filtering also improved the quality of the resulting corpora.
However, when we match sentences with their source URLs, it often happens that an innocuous sentence was extracted from a webpage with harmful content, and this content is present in our document contexts. We may release filtered versions of these corpora in the future, pending further work to filter harmful content at the document level.

### Recommendations

Please be aware that this dataset contains unfiltered data from the internet and may contain harmful content. For details about the content and limitations of this dataset, read this dataset card as well as [our paper](https://aclanthology.org/2024.acl-long.712/) before using the data for anything where the translated content or its usage might be sensitive.

## Citation

Please cite the paper if you use this dataset:

```bibtex
@inproceedings{pal-etal-2024-document,
    title = "Document-Level Machine Translation with Large-Scale Public Parallel Corpora",
    author = "Pal, Proyag and
      Birch, Alexandra and
      Heafield, Kenneth",
    editor = "Ku, Lun-Wei and
      Martins, Andre and
      Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.712",
    pages = "13185--13197",
}
```

## Dataset Card Authors

This dataset card was written by [Proyag Pal](https://proyag.github.io/). The [paper](https://aclanthology.org/2024.acl-long.712/) this dataset was created for was written by Proyag Pal, Alexandra Birch, and Kenneth Heafield at the University of Edinburgh.

## Dataset Card Contact

If you have any comments or questions, contact [Proyag Pal](mailto:proyag.pal@ed.ac.uk).