Dataset Card for Asian Language Treebank (ALT)
Dataset Summary
The ALT project aims to advance the state of the art in Asian natural language processing (NLP) through open collaboration on developing and using ALT. It was first conducted by NICT and UCSY, as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016), and was later developed further under the ASEAN IVO project.
Building ALT began with sampling about 20,000 sentences from English Wikinews; these sentences were then translated into the other languages.
Supported Tasks and Leaderboards
Machine Translation, Dependency Parsing
Languages
The dataset covers 13 languages:
- Bengali
- English
- Filipino
- Hindi
- Bahasa Indonesia
- Japanese
- Khmer
- Lao
- Malay
- Myanmar (Burmese)
- Thai
- Vietnamese
- Chinese (Simplified)
Dataset Structure
Data Instances
ALT Parallel Corpus
{
  "SNT.URLID": "80188",
  "SNT.URLID.SNTID": "1",
  "url": "http://en.wikinews.org/wiki/2007_Rugby_World_Cup:_Italy_31_-_5_Portugal",
  "bg": "[translated sentence]",
  "en": "[translated sentence]",
  "en_tok": "[translated sentence]",
  "fil": "[translated sentence]",
  "hi": "[translated sentence]",
  "id": "[translated sentence]",
  "ja": "[translated sentence]",
  "khm": "[translated sentence]",
  "lo": "[translated sentence]",
  "ms": "[translated sentence]",
  "my": "[translated sentence]",
  "th": "[translated sentence]",
  "vi": "[translated sentence]",
  "zh": "[translated sentence]"
}
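As a minimal sketch, the parallel corpus can be loaded with the Hugging Face `datasets` library. The `alt-parallel` config name and the flat per-language fields below mirror the instance above, but are assumptions to verify on the Hub:

```python
from datasets import load_dataset

# "alt-parallel" is assumed to be the multilingual config name; recent
# versions of `datasets` may also require trust_remote_code=True for
# script-based datasets such as this one.
ds = load_dataset("alt", "alt-parallel", split="train")

row = ds[0]
print(row["SNT.URLID"], row["url"])
# Assuming each language code is exposed as a flat field, as shown above.
for lang in ("en", "ja", "my", "zh"):
    print(lang, "->", row[lang])
```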
ALT Treebank
{
  "SNT.URLID": "80188",
  "SNT.URLID.SNTID": "1",
  "url": "http://en.wikinews.org/wiki/2007_Rugby_World_Cup:_Italy_31_-_5_Portugal",
  "status": "draft/reviewed",
  "value": "(S (S (BASENP (NNP Italy)) (VP (VBP have) (VP (VP (VP (VBN defeated) (BASENP (NNP Portugal))) (ADVP (RB 31-5))) (PP (IN in) (NP (BASENP (NNP Pool) (NNP C)) (PP (IN of) (NP (BASENP (DT the) (NN 2007) (NNP Rugby) (NNP World) (NNP Cup)) (PP (IN at) (NP (BASENP (NNP Parc) (FW des) (NNP Princes)) (COMMA ,) (BASENP (NNP Paris) (COMMA ,) (NNP France))))))))))) (PERIOD .))"
}
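The `value` field is a Penn-Treebank-style bracketed parse, so any bracketed-tree reader can consume it. A sketch using NLTK (one convenient option, not part of the dataset's own tooling):

```python
from nltk import Tree  # pip install nltk

# A shortened version of the bracketed parse shown above.
value = ("(S (S (BASENP (NNP Italy)) (VP (VBP have) "
         "(VP (VBN defeated) (BASENP (NNP Portugal))))) (PERIOD .))")

tree = Tree.fromstring(value)  # parse the bracketed string
print(tree.label())            # S
print(tree.leaves())           # ['Italy', 'have', 'defeated', 'Portugal', '.']
tree.pretty_print()            # ASCII rendering of the tree
```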
ALT Myanmar transliteration
{
  "en": "CASINO",
  "my": [
    "ကက်စီနို",
    "ကစီနို",
    "ကာစီနို",
    "ကာဆီနို"
  ]
}
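Since `my` holds a list of accepted spellings, a natural use is a lookup table from the Latin-script entry to its Myanmar variants. A sketch assuming the config is named `alt-my-transliteration` on the Hub:

```python
from datasets import load_dataset

# "alt-my-transliteration" is assumed to be the config name; check the Hub.
ds = load_dataset("alt", "alt-my-transliteration", split="train")

# Map each Latin-script entry to its list of accepted Myanmar spellings.
translit = {row["en"]: row["my"] for row in ds}
print(translit.get("CASINO"))
```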
Data Fields
ALT Parallel Corpus
- SNT.URLID: ID of the source article; the corresponding URL is listed in URL.txt
- SNT.URLID.SNTID: index number from 1 to 20000, identifying the sentence selected from the article referenced by SNT.URLID
- bg, en, en_tok, fil, hi, id, ja, khm, lo, ms, my, th, vi, zh: the sentence in the corresponding language (en_tok holds the tokenized English sentence)
ALT Treebank
- status: indicates how a sentence was annotated; draft sentences were annotated by one annotator, while reviewed sentences were annotated by two annotators
The annotation scheme differs from language to language; please see the per-language guidelines for more detail.
Data Splits
|             | train | valid | test |
|-------------|-------|-------|------|
| # articles  | 1698  | 98    | 97   |
| # sentences | 18088 | 1000  | 1018 |
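The split sizes in the table can be checked programmatically; a sketch, again assuming the `alt-parallel` config (the Hub may name the middle split `validation` rather than `valid`):

```python
from datasets import load_dataset

ds = load_dataset("alt", "alt-parallel")
# Expected per the table above: 18088 / 1000 / 1018 sentences.
for split_name, split in ds.items():
    print(split_name, len(split))
```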
Dataset Creation
Curation Rationale
The ALT project was initiated by the National Institute of Information and Communications Technology, Japan (NICT) in 2014. NICT started to build Japanese and English ALT and worked with the University of Computer Studies, Yangon, Myanmar (UCSY) to build Myanmar ALT in 2014. Then, the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT), the Institute for Infocomm Research, Singapore (I2R), the Institute of Information Technology, Vietnam (IOIT), and the National Institute of Posts, Telecoms and ICT, Cambodia (NIPTICT) joined to make ALT for Indonesian, Malay, Vietnamese, and Khmer in 2015.
Source Data
Initial Data Collection and Normalization
[More Information Needed]
Who are the source language producers?
The dataset was sampled from English Wikinews in 2014. The sentences were annotated with word segmentation, POS tags, and syntactic information, in addition to word alignment information, by linguistic experts from:
- the National Institute of Information and Communications Technology, Japan (NICT) for Japanese and English
- University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar
- the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian
- the Institute for Infocomm Research, Singapore (I2R) for Malay
- the Institute of Information Technology, Vietnam (IOIT) for Vietnamese
- the National Institute of Posts, Telecoms and ICT, Cambodia (NIPTICT) for Khmer
Annotations
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
- the National Institute of Information and Communications Technology, Japan (NICT) for Japanese and English
- University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar
- the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian
- the Institute for Infocomm Research, Singapore (I2R) for Malay
- the Institute of Information Technology, Vietnam (IOIT) for Vietnamese
- the National Institute of Posts, Telecoms and ICT, Cambodia (NIPTICT) for Khmer
Licensing Information
Creative Commons Attribution 4.0 International (CC BY 4.0)
Citation Information
Please cite the following if you make use of the dataset:
Hammam Riza, Michael Purwoadi, Gunarso, Teduh Uliniansyah, Aw Ai Ti, Sharifah Mahani Aljunied, Luong Chi Mai, Vu Tat Thang, Nguyen Phuong Thai, Vichet Chea, Rapid Sun, Sethserey Sam, Sopheap Seng, Khin Mar Soe, Khin Thandar Nwet, Masao Utiyama, Chenchen Ding. (2016). "Introduction of the Asian Language Treebank." Oriental COCOSDA.
BibTeX:
@inproceedings{riza2016introduction,
  title={Introduction of the asian language treebank},
  author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},
  booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},
  pages={1--6},
  year={2016},
  organization={IEEE}
}
Contributions
Thanks to @chameleonTK for adding this dataset.