Dataset Card for MTet

Dataset Summary

The MTet (Multi-domain Translation for English-Vietnamese) dataset contains roughly 4.2 million English-Vietnamese text pairs spanning multiple domains, including medical publications, religious texts, engineering articles, literature, news, and poems.

This dataset extends our previous SAT (Style Augmented Translation) dataset (v1.0) with additional high-quality English-Vietnamese sentence pairs across various domains.
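
The dataset can be loaded with the Hugging Face datasets library. The snippet below is a minimal sketch: the repository id albertvillanova/mtet is assumed from this repository's namespace, and loading requires the remote source files referenced by the loading script to be reachable.

  from datasets import load_dataset

  # Load the single train split (repository id assumed; adjust if the
  # dataset is hosted under a different namespace).
  ds = load_dataset("albertvillanova/mtet", split="train")
  print(ds[0])  # {'translation': {'en': '...', 'vi': '...'}}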

Supported Tasks and Leaderboards

  • Machine Translation

Languages

The languages in the dataset are:

  • Vietnamese (vi)
  • English (en)

Dataset Structure

Data Instances

{
  'translation': {
    'en': 'He said that existing restrictions would henceforth be legally enforceable, and violators would be fined.',
    'vi': 'Ông nói những biện pháp hạn chế hiện tại sẽ được nâng lên thành quy định pháp luật, và những ai vi phạm sẽ chịu phạt.'
  }
}

Data Fields

  • translation:
    • en: Parallel text in English.
    • vi: Parallel text in Vietnamese.
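
As a sketch of how these fields are accessed (again assuming the albertvillanova/mtet repository id), each example can be unpacked into its English and Vietnamese sides:

  from datasets import load_dataset

  ds = load_dataset("albertvillanova/mtet", split="train")  # repository id assumed

  # Unpack the parallel text from the first few examples.
  for example in ds.select(range(3)):
      pair = example["translation"]
      print("EN:", pair["en"])
      print("VI:", pair["vi"])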

Data Splits

The dataset is in a single "train" split.

Split    Number of examples
train    4,163,853
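
Given the size of the split (roughly 4.2 million pairs), it may help to iterate in streaming mode rather than downloading everything up front. A minimal sketch, once more assuming the albertvillanova/mtet repository id:

  from datasets import load_dataset
  from itertools import islice

  # Stream the train split instead of materializing ~4.2M pairs on disk
  # (repository id assumed; streaming requires the remote files to be reachable).
  ds = load_dataset("albertvillanova/mtet", split="train", streaming=True)
  for example in islice(ds, 5):
      print(example["translation"]["en"])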

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).

Citation Information

@article{mTet2022,
    author  = {Chinh Ngo and Hieu Tran and Long Phan and Trieu H. Trinh and Hieu Nguyen and Minh Nguyen and Minh-Thang Luong},
    title   = {MTet: Multi-domain Translation for English and Vietnamese},
    journal = {https://github.com/vietai/mTet},
    year    = {2022},
}

Contributions

Thanks to @albertvillanova for adding this dataset.
