Proyag committed on
Commit 2854848
1 Parent(s): cf92055

Dataset card

Files changed (1)
  1. README.md +165 -1
README.md CHANGED
size_categories:
- 100M<n<1B
license: cc0-1.0
pretty_name: ParaCrawl_Context
---
# Dataset Card for ParaCrawl_Context

<!-- Provide a quick summary of the dataset. -->

This is a dataset for document-level machine translation, introduced in the ACL 2024 paper **Document-Level Machine Translation with Large-Scale Public Parallel Corpora**. It consists of parallel sentence pairs from the [ParaCrawl](https://paracrawl.eu/) dataset, along with the preceding context extracted from the webpages the sentences were crawled from.

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->
This dataset adds document-level context to parallel corpora released by [ParaCrawl](https://paracrawl.eu/). This is useful for training document-level (context-aware) machine translation models, for which very few large-scale public datasets exist. While the ParaCrawl project released large-scale parallel corpora at the sentence level, it did not preserve the document context from the webpages the sentences were originally extracted from. We used additional data sources to retrieve the contexts from the original web text, and thus created datasets that can be used to train document-level MT models.

- **Curated by:** Proyag Pal, Alexandra Birch, Kenneth Heafield, from data released by ParaCrawl
- **Language pairs:** eng-deu, eng-fra, eng-ces, eng-pol, eng-rus
- **License:** Creative Commons Zero v1.0 Universal (CC0)
- **Repository:** https://github.com/Proyag/ParaCrawl-Context
- **Paper:** https://proyag.github.io/files/papers/docmt.pdf
<!-- Replace with ACL Anthology link when available -->

## Uses

<!-- Address questions around how the dataset is intended to be used. -->
This dataset is intended for document-level (context-aware) machine translation.

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->
The intended use of this dataset is to treat the sentence fields as the source and target sides of a translation pair, and to provide the contexts to the model as additional information. This could be done, for example, with a dual-encoder model, where one encoder encodes the source sentence while the second encoder encodes the source/target context. For an example, see our associated [paper](https://proyag.github.io/files/papers/docmt.pdf).

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
We expect that this dataset will not work very well for the document-level translation scenario where an entire concatenated document is provided as input and a full translation is produced by the model.
This is because of how the data was extracted - by matching sentences to their originating URLs and extracting the preceding context from those pages - which means:

* There is no guarantee that the preceding context automatically extracted from the originating URL is related to the sentence pair at all.
* Many sentences came from multiple URLs, and thus have multiple contexts, so source and target contexts concatenated with source and target sentences may not produce parallel "documents" at all in many cases.

However, most examples in our datasets have a unique context, so concatenation might work better if only those examples are used.

We have not validated this experimentally, and you are encouraged to try it and let us know if it works!

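For readers who want to try the concatenation setup anyway, a minimal sketch of the suggested filtering (keeping only examples whose context field contains no `|||` delimiter, i.e. a unique context) might look like the following. The field names follow this card's eng-deu examples; the helper names and the exact delimiter spacing are illustrative assumptions, not code from the release:

```python
def unique_context_examples(examples):
    """Keep only examples whose contexts are unique (no '|||' delimiter),
    where concatenation into pseudo-documents is most likely to be safe."""
    return [
        ex for ex in examples
        if "|||" not in ex["eng_context"] and "|||" not in ex["deu_context"]
    ]

def to_pseudo_document(context, sentence):
    """Concatenate context and sentence, restoring the line breaks that
    were replaced by the <docline> token during extraction."""
    return context.replace(" <docline> ", "\n") + "\n" + sentence

# Toy examples standing in for real dataset rows.
examples = [
    {"eng": "Hello.", "eng_context": "Title <docline> Intro",
     "deu": "Hallo.", "deu_context": "Titel <docline> Einleitung"},
    {"eng": "Bye.", "eng_context": "A ||| B",   # two contexts: dropped
     "deu": "Tschüss.", "deu_context": "C"},
]
kept = unique_context_examples(examples)
print(to_pseudo_document(kept[0]["eng_context"], kept[0]["eng"]))
```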
## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
There are three versions of the dataset for each language pair. For a language pair SRC-TRG, they are:
- `SRC-TRG.src_contexts` - which has preceding context for only the SRC side
- `SRC-TRG.trg_contexts` - which has preceding context for only the TRG side
- `SRC-TRG.both_contexts` - which has preceding context for both the SRC and TRG sides

### Data Instances
Example from `eng-deu.both_contexts`:
```yaml
{
'eng': 'This stage is 32.8 km long and can be accomplished in 8 hours and 30 minutes.',
'eng_context': "Cars <docline> Glungezer chair lift <docline> Patscherkofel cable cars <docline> Service <docline> Classifications of Hiking Routes <docline> Safety in the Mountains <docline> Mountain huts and alpine restaurants <docline> Guides <docline> Sport Shops <docline> Brochures and Maps <docline> Hiking <docline> Free hiking programme <docline> Hiking <docline> Hikes <docline> Long-distance walking trails <docline> Summit Tours <docline> Family hikes <docline> Education and nature trails <docline> Nature reserves <docline> Geocaching <docline> Lifts & cable cars <docline> Axamer Lizum <docline> Innsbruck Nordkette cable cars <docline> Drei-Seen-Bahn in Kühtai <docline> Muttereralm <docline> Oberperfuss Cable Cars <docline> Glungezer chair lift <docline> Patscherkofel cable cars <docline> Service <docline> Classifications of Hiking Routes <docline> Safety in the Mountains <docline> Mountain huts and alpine restaurants <docline> Guides <docline> Sport Shops <docline> Brochures and Maps <docline> today <docline> 12°C/54°F <docline> 70% Fineweather <docline> 2500mm <docline> Frostborder <docline> Tuesday <docline> 17°C/63°F <docline> 50% Fineweather <docline> 3100mm <docline> Frostborder <docline> Wednesday <docline> 18°C/64°F <docline> 40% Fineweather <docline> 3400mm <docline> Frostborder <docline> Forecast <docline> We will see a nice start to the day with sunshine. Clouds will however gradually increase at all levels producing showers in the afternoon. <docline> Tendency <docline> Air pressure will rise over Central Europe and there will be some clearer spells at times. A period of fine weather is not forecast, however. Until Thursday, sunny spells will alternate with showers in the afternoon. <docline> Need help? Contact us! <docline> Innsbruck Tourism <docline> +43 512 / 59 850 <docline> office@innsbruck.info <docline> Mon - Fri: 8.00 am - 5.00 pm <docline> Hotel- and group reservations <docline> +43 512 / 56 2000 <docline> incoming@innsbruck.info <docline> Mon - Fri: 9.00 am - 5.00 pm <docline> Tourist info <docline> +43 512 / 53 56-0 <docline> info@innsbruck.info <docline> Mon - Sat: 9.00 am - 5.00 pm <docline> DE <docline> EN <docline> IT <docline> FR <docline> NL <docline> ES <docline> Hikes <docline> innsbruck.info <docline> Hiking <docline> Hiking <docline> Hikes <docline> Hike with the family, as a couple or alone, short or long, to the summit or on the flat. Search out the correct route for you around Innsbruck. The filter below is here to help. Choose the length of walk, the difficulty level, duration and much more. The results will then deliver tailor-made hiking tips for your holiday. <docline> The Tyrolean section of The Way of St. James through Innsbruck <docline> https://www.innsbruck.info/fileadmin/userdaten/contwise/poi-28003079-jakobsweg_sterbach_in_muehlau_42027886.jpg <docline> Back Overview <docline> Difficulty <docline> easy <docline> Altitude up <docline> 900 METER <docline> Max. route length <docline> 81.4 KM <docline> Best season <docline> April - October <docline> Information/food <docline> GPX Download Route to start <docline> Three of the sections along the main route of The Way of St. James pass through the Innsbruck holiday region. <docline> From Terfens to Innsbruck: <docline> This stage is 24.2 kilometres long and is possible in 6 hours and 15 minutes. The Way of St. James leads from the medieval town of Hall in Tirol via the villages of Absam and Thaur, through the market town of Rum and on to the city of Innsbruck. Once in Innsbruck, the route continues to St. James' Cathedral. <docline> From Innsbruck to Pfaffenhofen: <docline>",
'deu_context': 'mit Kindern <docline> Webcams <docline> Prospekte <docline> Aktuelle Top-Themen auf Innsbruck.info <docline> Welcome Card <docline> Innsbruck Card <docline> Bräuche im Sommer <docline> Walks to explore <docline> Innsbruck Webcams <docline> Hiking <docline> Bergwanderprogramm <docline> Wandern <docline> Wanderungen <docline> Weitwanderungen <docline> Gipfeltouren <docline> Familienwanderungen <docline> Themen- und Naturlehrpfade <docline> Naturschauplätze <docline> Geocaching <docline> Bergbahnen und Lifte <docline> Axamer Lizum <docline> Innsbrucker Nordkettenbahnen <docline> Dreiseenbahn Kühtai <docline> Muttereralm <docline> Bergbahn Oberperfuss <docline> Glungezerbahn <docline> Patscherkofelbahn <docline> Service <docline> Klassifizierung der Wanderwege <docline> Sicherheit am Berg <docline> Almhütten und Bergrestaurants <docline> Bergführer und Guides <docline> Sportshops <docline> Prospekte und Karten <docline> Hiking <docline> Bergwanderprogramm <docline> Wandern <docline> Wanderungen <docline> Weitwanderungen <docline> Gipfeltouren <docline> Familienwanderungen <docline> Themen- und Naturlehrpfade <docline> Naturschauplätze <docline> Geocaching <docline> Bergbahnen und Lifte <docline> Axamer Lizum <docline> Innsbrucker Nordkettenbahnen <docline> Dreiseenbahn Kühtai <docline> Muttereralm <docline> Bergbahn Oberperfuss <docline> Glungezerbahn <docline> Patscherkofelbahn <docline> Service <docline> Klassifizierung der Wanderwege <docline> Sicherheit am Berg <docline> Almhütten und Bergrestaurants <docline> Bergführer und Guides <docline> Sportshops <docline> Prospekte und Karten <docline> Heute <docline> 18°C <docline> 30% Sonne <docline> 3610mm <docline> Frostgrenze <docline> Dienstag <docline> 17°C <docline> 50% Sonne <docline> 3100mm <docline> Frostgrenze <docline> Mittwoch <docline> 18°C <docline> 40% Sonne <docline> 3400mm <docline> Frostgrenze <docline> Vorhersage <docline> Der Tag beginnt zunächst noch recht beschaulich und die Sonne scheint. Allerdings nimmt die Bewölkung nach und nach in allen Schichten zu und am Nachmittag kommt es dann zu Schauern. <docline> Tendenz <docline> Über Mitteleuropa steigt in der Folge der Luftdruck und zeitweise lockert es auf. Dauerhaftes Schönwetter stellt sich jedoch noch nicht ein: Bis zum Donnerstag gibt es neben Sonne vor allem jeweils nachmittags auch Schauer. <docline> Können wir helfen? Kontaktieren Sie uns! <docline> Innsbruck Tourismus <docline> +43 512 / 59 850 <docline> office@innsbruck.info <docline> Mo - Fr: 8:00 - 17:00 Uhr <docline> Hotel- u. Gruppenreservierung <docline> +43 512 / 56 2000 <docline> incoming@innsbruck.info <docline> Mo - Fr: 9:00 - 17:00 Uhr <docline> Tourismus Information <docline> +43 512 / 53 56-0 <docline> info@innsbruck.info <docline> Mo - Sa: 9:00 - 17:00 Uhr <docline> DE <docline> EN <docline> IT <docline> FR <docline> NL <docline> ES <docline> Wanderungen <docline> innsbruck.info <docline> Wandern <docline> Wandern <docline> Wanderungen <docline> Wandern mit Familie, zu zweit oder solo, weit oder kurz, zum Gipfelkreuz oder entspannt ohne viel Steigung. Suchen Sie sich die passende Wanderung rund um Innsbruck aus. Die Filter oberhalb der Ergebnisliste helfen dabei: Wählen Sie Streckenlänge, Schwierigkeitsgrad, Gehzeit und einiges mehr. Die Ergebnisse darunter liefern maßgeschneiderte Wandertipps für Ihren Urlaub. <docline> Tiroler Jakobsweg durch Innsbruck <docline> https://www.innsbruck.info/fileadmin/userdaten/contwise/poi-28003079-jakobsweg_sterbach_in_muehlau_42027886.jpg <docline> Zurück Zur Übersicht <docline> Schwierigkeit <docline> leicht <docline> Höhenmeter bergauf <docline> 900 METER <docline> Streckenlänge <docline> 81.4 KM <docline> Beste Jahreszeit <docline> April bis Oktober <docline> Mit Einkehrmöglichkeit <docline> GPX Download Route zum Startpunkt <docline> Drei Abschnitte der Hauptroute des Jakobswegs verlaufen durch die Ferienregion Innsbruck. <docline> Von Terfens nach Innsbruck: <docline> In 6 Stunden 15 Minuten sind die 24,2 Kilometer dieses Abschnittes zu schaffen. Von der mittelalterlichen Stadt Hall über Absam und Thaur führt der Jakobsweg durch die Marktgemeinde Rum und weiter nach Innsbruck. Dort angelangt kommt man zum Dom St.Jakob. <docline> Von Innsbruck bis Pfaffenhofen: <docline>',
'deu': 'Der Abschnitt ist 32,8 Kilometer lang und in einer Zeit von 8 Stunden und 30 Minuten zu schaffen.'
}
```

`eng-deu.src_contexts` will have the `eng`, `eng_context`, and `deu` fields, while `eng-deu.trg_contexts` will have the `eng`, `deu_context`, and `deu` fields.

This example has only one context on each side, but a field may contain one or more alternative contexts separated by `|||` delimiters.

### Data Fields
For `SRC-TRG.src_contexts` or `SRC-TRG.trg_contexts`, there are 3 fields:
- `SRC` - containing the source (English) sentence.
- `TRG` - containing the target language sentence.
- `SRC_context` or `TRG_context` - containing the source/target context(s). There may be multiple contexts from multiple webpages, separated by the delimiter `|||`. Within each context, line breaks have been replaced with a `<docline>` token.

`SRC-TRG.both_contexts` contains 4 fields, since it has both the `SRC_context` and `TRG_context` fields.

Remember to replace `SRC` and `TRG` in these examples with the actual language codes in each case. `SRC` is always `eng`, while `TRG` can be `deu`, `fra`, `ces`, `pol`, or `rus`.

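A small sketch of how a context field might be parsed: splitting the alternative contexts on the `|||` delimiter and turning `<docline>` tokens back into line breaks. The function name is ours, not part of the release, and whether whitespace surrounds the `|||` delimiter is an assumption the `strip()` call papers over:

```python
DOCLINE = "<docline>"

def parse_contexts(context_field):
    """Split a *_context field into its alternative contexts (delimited
    by '|||') and restore line breaks from the <docline> tokens."""
    contexts = [c.strip() for c in context_field.split("|||")]
    # Second replace() catches <docline> tokens at string boundaries.
    return [c.replace(f" {DOCLINE} ", "\n").replace(DOCLINE, "\n") for c in contexts]

# A toy field with two alternative contexts.
field = "Page title <docline> Some nav item <docline> First paragraph ||| Another page <docline> Other text"
for ctx in parse_contexts(field):
    print(ctx)
    print("--")
```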
### Data Splits
This dataset does not contain any validation or test sets; all the provided data is intended to be used for training.

If you need document-level validation/test sets while training models with this data, it should be quite simple to construct them in the same format from other readily available test sets with document information, such as the [WMT](https://www2.statmt.org/wmt24/translation-task.html) test sets.

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->
While document-level machine translation has inherent advantages over sentence-level approaches, very few large-scale document-level parallel corpora are publicly available. Parallel corpora constructed from web crawls often discard document context in the process of extracting sentence pairs. ParaCrawl released sentence-level parallel corpora with their source URLs, and separately also released raw web text, so we were able to match the URLs to recover the context that the sentences originally occurred in. This enabled us to create large-scale parallel corpora for training document-level machine translation models.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
This dataset was extracted entirely from [parallel corpora](https://paracrawl.eu/) and [raw web text](https://paracrawl.eu/moredata) released by ParaCrawl. Please refer to the [ParaCrawl paper](https://aclanthology.org/2020.acl-main.417/) for more information about the source of the data.

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

To extract the contexts for ParaCrawl sentence pairs, we used the following method (copied from the [paper](https://proyag.github.io/files/papers/docmt.pdf)):
1. Extract the source URLs and corresponding sentences from the TMX files of [ParaCrawl release 9](https://paracrawl.eu/releases) (or the bonus release in the case of eng-rus). Each sentence is usually associated with many different source URLs, and we keep all of them.
2. Match the extracted URLs against the URLs in the raw text data and retrieve the corresponding base64-encoded webpage/document, if available.
3. Decode the base64 documents and try to match the original sentence. If the sentence is not found in the document, discard the document. Otherwise, keep the 512 tokens preceding the sentence (where a token is anything separated by a space), replace line breaks with a special `<docline>` token, and store the result as the document context. Since some very common sentences correspond to huge numbers of source URLs, we keep a maximum of 1000 unique contexts per sentence, separated by a `|||` delimiter in the final dataset.
4. Finally, we compile three different files per language pair: a dataset with all sentence pairs that have one or more source contexts (`*.src_contexts`), one with all sentence pairs with target contexts (`*.trg_contexts`), and a third with both contexts (`*.both_contexts`).

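As a rough illustration (not the actual extraction code, which operated on base64-encoded documents at scale), step 3 above might be sketched as follows. Whether `<docline>` tokens count towards the 512-token budget is our assumption:

```python
def extract_context(document, sentence, max_tokens=512):
    """Return up to max_tokens space-separated tokens preceding the first
    occurrence of `sentence` in `document`, with line breaks replaced by
    a <docline> token; None if the sentence is not found."""
    pos = document.find(sentence)
    if pos == -1:
        return None  # sentence not in document: discard the document
    preceding = document[:pos].replace("\n", " <docline> ")
    # A "token" is anything separated by a space; here <docline> tokens
    # count towards the budget (an assumption of this sketch).
    tokens = [t for t in preceding.split(" ") if t]
    return " ".join(tokens[-max_tokens:])

doc = "Heading\nFirst line of text.\nThe target sentence here."
print(extract_context(doc, "The target sentence here."))
# -> Heading <docline> First line of text. <docline>
```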
#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

See the [ParaCrawl paper](https://aclanthology.org/2020.acl-main.417/).

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

This dataset is constructed from web-crawled data, and may therefore contain sensitive or harmful content. The ParaCrawl datasets were released after some filtering at the sentence-pair level, but please note that the contexts we extracted from the original webpages have not been filtered in any way.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->
\[This section has been copied from the [paper](https://proyag.github.io/files/papers/docmt.pdf), which you can refer to for details.\]

**Relevance of context**: Our work assumes that any extracted text preceding a given sentence on a webpage is relevant “document context” for that sentence. However, in many cases the extracted context is likely unrelated to the sentence, since most webpages are not formatted as a coherent “document”. As a result, the dataset often includes irrelevant context, such as lists of products, UI elements, or video titles extracted from webpages, which will not be directly helpful to document-level translation models.

**Unaligned contexts**: For sentences with multiple matching contexts, the source and target contexts may not always be aligned. However, the vast majority of sentence pairs have exactly one source/target context, and should therefore have aligned contexts. We recommend filtering on this basis if aligned contexts are required.

**Language coverage**: ParaCrawl focused on European Union languages, with only a few “bonus” releases for other languages, and most of the corpora are for English-centric language pairs. Due to the high computational cost of extracting these corpora, our work further covers only a subset of these languages, resulting in corpora for only a few European languages, some of them closely related. Given the availability of raw data and tools to extract such corpora for many more languages from all over the world, we hope the community is encouraged to build such resources for a much larger variety of language pairs.

**Harmful content**: The main corpora released by ParaCrawl were filtered to remove sensitive content, particularly pornography. Since pornographic websites typically contain large amounts of machine-translated text, this filtering also improved the quality of the resulting corpora. However, when we match sentences with their source URLs, it often happens that an innocuous sentence was extracted from a webpage with harmful content, and this content is present in our document contexts. We may release filtered versions of these corpora in the future, pending further work to filter harmful content at the document level.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Please be aware that this dataset contains unfiltered data from the internet and may contain harmful content. For details about the content and limitations of this dataset, read this dataset card as well as [our paper](https://proyag.github.io/files/papers/docmt.pdf) before using the data anywhere the translated content or its usage might be sensitive.

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

Please cite the paper if you use this dataset.
Until the ACL Anthology is updated with ACL 2024 papers, you can use the following BibTeX:

<!-- Update with ACL Anthology bibtex-->
```bibtex
@inproceedings{pal-etal-2024-documentlevel,
  title={Document-Level Machine Translation with Large-Scale Public Parallel Corpora},
  author={Proyag Pal and Alexandra Birch and Kenneth Heafield},
  booktitle={The 62nd Annual Meeting of the Association for Computational Linguistics},
  year={2024},
  url={https://openreview.net/forum?id=kcl1LZlQJi}
}
```

## Dataset Card Authors

This dataset card was written by [Proyag Pal](https://proyag.github.io/). The [paper](https://proyag.github.io/files/papers/docmt.pdf) this dataset was created for was written by Proyag Pal, Alexandra Birch, and Kenneth Heafield at the University of Edinburgh.

## Dataset Card Contact

If you have any comments or questions, contact [Proyag Pal](mailto:proyag.pal@ed.ac.uk).