---
annotations_creators:
  - machine-generated
  - expert-generated
language_creators:
  - found
languages:
  - af-ZA
  - am-ET
  - ar-SA
  - az-AZ
  - bn-BD
  - cy-GB
  - da-DK
  - de-DE
  - el-GR
  - en-US
  - es-ES
  - fa-IR
  - fi-FI
  - fr-FR
  - he-IL
  - hi-IN
  - hu-HU
  - hy-AM
  - id-ID
  - is-IS
  - it-IT
  - ja-JP
  - jv-ID
  - ka-GE
  - km-KH
  - kn-IN
  - ko-KR
  - lv-LV
  - ml-IN
  - mn-MN
  - ms-MY
  - my-MM
  - nb-NO
  - nl-NL
  - pl-PL
  - pt-PT
  - ro-RO
  - ru-RU
  - sl-SL
  - sq-AL
  - sv-SE
  - sw-KE
  - ta-IN
  - te-IN
  - th-TH
  - tl-PH
  - tr-TR
  - ur-PK
  - vi-VN
  - zh-CN
  - zh-TW
licenses:
  - Copyright Amazon.com Inc. or its affiliates.
multilinguality:
  - af-ZA
  - am-ET
  - ar-SA
  - az-AZ
  - bn-BD
  - cy-GB
  - da-DK
  - de-DE
  - el-GR
  - en-US
  - es-ES
  - fa-IR
  - fi-FI
  - fr-FR
  - he-IL
  - hi-IN
  - hu-HU
  - hy-AM
  - id-ID
  - is-IS
  - it-IT
  - ja-JP
  - jv-ID
  - ka-GE
  - km-KH
  - kn-IN
  - ko-KR
  - lv-LV
  - ml-IN
  - mn-MN
  - ms-MY
  - my-MM
  - nb-NO
  - nl-NL
  - pl-PL
  - pt-PT
  - ro-RO
  - ru-RU
  - sl-SL
  - sq-AL
  - sv-SE
  - sw-KE
  - ta-IN
  - te-IN
  - th-TH
  - tl-PH
  - tr-TR
  - ur-PK
  - vi-VN
  - zh-CN
  - zh-TW
pretty_name: MASSIVE
size_categories:
  - 100K<n<1M
source_datasets:
  - original
task_categories:
  - text-classification
task_ids:
  - intent-classification
  - multi-class-classification
  - natural-language-understanding
---

MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages

Table of Contents

Dataset Description

Dataset Summary

MASSIVE is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.

| Name | # Lang | Utt/Lang | Domains | Intents | Slots |
|---|---|---|---|---|---|
| MASSIVE | 51 | 19,521 | 18 | 60 | 55 |
| SLURP (Bastianelli et al., 2020) | 1 | 16,521 | 18 | 60 | 55 |
| NLU Evaluation Data (Liu et al., 2019) | 1 | 25,716 | 18 | 54 | 56 |
| Airline Travel Information System (ATIS) (Price, 1990) | 1 | 5,871 | 1 | 26 | 129 |
| ATIS with Hindi and Turkish (Upadhyay et al., 2018) | 3 | 1,315-5,871 | 1 | 26 | 129 |
| MultiATIS++ (Xu et al., 2020) | 9 | 1,422-5,897 | 1 | 21-26 | 99-140 |
| Snips (Coucke et al., 2018) | 1 | 14,484 | - | 7 | 53 |
| Snips with French (Saade et al., 2019) | 2 | 4,818 | 2 | 14-15 | 11-12 |
| Task Oriented Parsing (TOP) (Gupta et al., 2018) | 1 | 44,873 | 2 | 25 | 36 |
| Multilingual Task-Oriented Semantic Parsing (MTOP) (Li et al., 2021) | 6 | 15,195-22,288 | 11 | 104-113 | 72-75 |
| Cross-Lingual Multilingual Task Oriented Dialog (Schuster et al., 2019) | 3 | 5,083-43,323 | 3 | 12 | 11 |
| Microsoft Dialog Challenge (Li et al., 2018) | 1 | 38,276 | 3 | 11 | 29 |
| Fluent Speech Commands (FSC) (Lugosch et al., 2019) | 1 | 30,043 | - | 31 | - |
| Chinese Audio-Textual Spoken Language Understanding (CATSLU) (Zhu et al., 2019) | 1 | 16,258 | 4 | - | 94 |

Supported Tasks and Leaderboards

The dataset can be used to train models for natural language understanding (NLU), covering the following tasks:

  • intent-classification
  • multi-class-classification
  • natural-language-understanding
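
As a rough illustration of the intent-classification task, the sketch below fine-tunes a multilingual encoder on the en-US configuration with the transformers Trainer. It is a minimal sketch, not a reference implementation: the checkpoint (xlm-roberta-base), the hyperparameters, the split names, and the assumption that intent is exposed as a plain string column (hence class_encode_column) are illustrative and may need adjusting to the actual loader.

```python
# Minimal intent-classification sketch; checkpoint and hyperparameters are
# illustrative assumptions, not part of the original MASSIVE release.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

raw = load_dataset("qanastek/MASSIVE", "en-US")
# If "intent" is a plain string column, turn it into integer class labels;
# skip this step if the loader already exposes it as a ClassLabel.
raw = raw.class_encode_column("intent")

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def tokenize(batch):
    enc = tokenizer(batch["utt"], truncation=True)
    enc["labels"] = batch["intent"]
    return enc

encoded = raw.map(tokenize, batched=True)
num_labels = raw["train"].features["intent"].num_classes  # 60 intents

model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=num_labels
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="massive-intent-en-US", num_train_epochs=3),
    train_dataset=encoded["train"],
    # The dev partition may be named "validation" or "dev" depending on the loader.
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```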

Languages

The corpus consists of parallel utterances in the following 51 languages:

  • Afrikaans - South Africa (af-ZA)
  • Amharic - Ethiopia (am-ET)
  • Arabic - Saudi Arabia (ar-SA)
  • Azeri - Azerbaijan (az-AZ)
  • Bengali - Bangladesh (bn-BD)
  • Chinese - China (zh-CN)
  • Chinese - Taiwan (zh-TW)
  • Danish - Denmark (da-DK)
  • German - Germany (de-DE)
  • Greek - Greece (el-GR)
  • English - United States (en-US)
  • Spanish - Spain (es-ES)
  • Farsi - Iran (fa-IR)
  • Finnish - Finland (fi-FI)
  • French - France (fr-FR)
  • Hebrew - Israel (he-IL)
  • Hungarian - Hungary (hu-HU)
  • Armenian - Armenia (hy-AM)
  • Indonesian - Indonesia (id-ID)
  • Icelandic - Iceland (is-IS)
  • Italian - Italy (it-IT)
  • Japanese - Japan (ja-JP)
  • Javanese - Indonesia (jv-ID)
  • Georgian - Georgia (ka-GE)
  • Khmer - Cambodia (km-KH)
  • Korean - Korea (ko-KR)
  • Latvian - Latvia (lv-LV)
  • Mongolian - Mongolia (mn-MN)
  • Malay - Malaysia (ms-MY)
  • Burmese - Myanmar (my-MM)
  • Norwegian - Norway (nb-NO)
  • Dutch - Netherlands (nl-NL)
  • Polish - Poland (pl-PL)
  • Portuguese - Portugal (pt-PT)
  • Romanian - Romania (ro-RO)
  • Russian - Russia (ru-RU)
  • Slovenian - Slovenia (sl-SL)
  • Albanian - Albania (sq-AL)
  • Swedish - Sweden (sv-SE)
  • Swahili - Kenya (sw-KE)
  • Hindi - India (hi-IN)
  • Kannada - India (kn-IN)
  • Malayalam - India (ml-IN)
  • Tamil - India (ta-IN)
  • Telugu - India (te-IN)
  • Thai - Thailand (th-TH)
  • Tagalog - Philippines (tl-PH)
  • Turkish - Turkey (tr-TR)
  • Urdu - Pakistan (ur-PK)
  • Vietnamese - Vietnam (vi-VN)
  • Welsh - United Kingdom (cy-GB)

Load the dataset with Hugging Face

```python
from datasets import load_dataset

dataset = load_dataset("qanastek/MASSIVE", "en-US", split='train')
print(dataset)
print(dataset[0])
```
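
Each locale is a separate configuration, so a multilingual training set can be built by loading several locales and concatenating them. This is an illustrative sketch; it assumes all locale configurations share the same schema, which the parallel design implies.

```python
from datasets import load_dataset, concatenate_datasets

# Any of the 51 locale codes listed above can be combined.
locales = ["en-US", "fr-FR", "de-DE"]
train = concatenate_datasets(
    [load_dataset("qanastek/MASSIVE", loc, split="train") for loc in locales]
)
print(train)  # one Dataset with len(locales) * 11,514 rows
```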

Dataset Structure

Data Instances (taken from the Alexa GitHub repository)

```json
{
  "id": "0",
  "locale": "de-DE",
  "partition": "test",
  "scenario": "alarm",
  "intent": "alarm_set",
  "utt": "weck mich diese woche um fünf uhr morgens auf",
  "annot_utt": "weck mich [date : diese woche] um [time : fünf uhr morgens] auf",
  "worker_id": "8",
  "slot_method": [
    {
      "slot": "time",
      "method": "translation"
    },
    {
      "slot": "date",
      "method": "translation"
    }
  ],
  "judgments": [
    {
      "worker_id": "32",
      "intent_score": 1,
      "slots_score": 0,
      "grammar_score": 4,
      "spelling_score": 2,
      "language_identification": "target"
    },
    {
      "worker_id": "8",
      "intent_score": 1,
      "slots_score": 1,
      "grammar_score": 4,
      "spelling_score": 2,
      "language_identification": "target"
    },
    {
      "worker_id": "28",
      "intent_score": 1,
      "slots_score": 1,
      "grammar_score": 4,
      "spelling_score": 2,
      "language_identification": "target"
    }
  ]
}
```

Data Fields (taken from the Alexa GitHub repository)

id: maps to the original ID in the SLURP collection. Mapping back to the SLURP en-US utterance, this utterance served as the basis for this localization.

locale: is the language and country code according to ISO 639-1 and ISO 3166-1.

partition: is either train, dev, or test, according to the original split in SLURP.

scenario: is the general domain, aka "scenario" in SLURP terminology, of an utterance.

intent: is the specific intent of an utterance within a domain, formatted as {scenario}_{intent}.

utt: the raw utterance text without annotations.

annot_utt: the text from utt with slot annotations formatted as [{label} : {entity}] (see the parsing sketch at the end of this section).

worker_id: The obfuscated worker ID from MTurk of the worker completing the localization of the utterance. Worker IDs are specific to a locale and do not map across locales.

slot_method: for each slot in the utterance, whether that slot was a translation (i.e., same expression just in the target language), localization (i.e., not the same expression but a different expression was chosen more suitable to the phrase in that locale), or unchanged (i.e., the original en-US slot value was copied over without modification).

judgments: Each judgment collected for the localized utterance has 6 keys. worker_id is the obfuscated worker ID from MTurk of the worker completing the judgment. Worker IDs are specific to a locale and do not map across locales, but are consistent across the localization tasks and the judgment tasks, e.g., judgment worker ID 32 in the example above may appear as the localization worker ID for the localization of a different de-DE utterance, in which case it would be the same worker.

intent_score : "Does the sentence match the intent?"
  0: No
  1: Yes
  2: It is a reasonable interpretation of the goal

slots_score : "Do all these terms match the categories in square brackets?"
  0: No
  1: Yes
  2: There are no words in square brackets (utterance without a slot)

grammar_score : "Read the sentence out loud. Ignore any spelling, punctuation, or capitalization errors. Does it sound natural?"
  0: Completely unnatural (nonsensical, cannot be understood at all)
  1: Severe errors (the meaning cannot be understood and doesn't sound natural in your language)
  2: Some errors (the meaning can be understood but it doesn't sound natural in your language)
  3: Good enough (easily understood and sounds almost natural in your language)
  4: Perfect (sounds natural in your language)

spelling_score : "Are all words spelled correctly? Ignore any spelling variances that may be due to differences in dialect. Missing spaces should be marked as a spelling error."
  0: There are more than 2 spelling errors
  1: There are 1-2 spelling errors
  2: All words are spelled correctly

language_identification : "The following sentence contains words in the following languages (check all that apply)"
  1: target
  2: english
  3: other
  4: target & english
  5: target & other
  6: english & other
  7: target & english & other
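
For slot filling, the bracketed annot_utt format and the per-utterance judgments described above can be post-processed in a few lines. The sketch below is illustrative: it assumes whitespace tokenization and non-nested slots, and it normalises judgments because, depending on the loader, they may be returned either as a list of dicts or as a dict of parallel lists.

```python
import re
from datasets import load_dataset

# Matches the "[label : entity]" spans described for the annot_utt field above.
SLOT_PATTERN = re.compile(r"\[\s*([^\]:]+?)\s*:\s*([^\]]+?)\s*\]")

def annot_to_bio(annot_utt):
    """Convert an annotated utterance into (token, BIO-tag) pairs."""
    tokens, tags = [], []
    cursor = 0
    for m in SLOT_PATTERN.finditer(annot_utt):
        # Words before the bracketed span carry the 'O' tag.
        for word in annot_utt[cursor:m.start()].split():
            tokens.append(word)
            tags.append("O")
        label, entity = m.group(1), m.group(2)
        for i, word in enumerate(entity.split()):
            tokens.append(word)
            tags.append(("B-" if i == 0 else "I-") + label)
        cursor = m.end()
    for word in annot_utt[cursor:].split():
        tokens.append(word)
        tags.append("O")
    return list(zip(tokens, tags))

def judgments_as_list(judgments):
    """Normalise judgments to a list of dicts, whichever form the loader returns."""
    if isinstance(judgments, dict):
        return [dict(zip(judgments, row)) for row in zip(*judgments.values())]
    return judgments

print(annot_to_bio("weck mich [date : diese woche] um [time : fünf uhr morgens] auf"))
# [('weck', 'O'), ('mich', 'O'), ('diese', 'B-date'), ('woche', 'I-date'), ('um', 'O'), ...]

# Keep only utterances whose judges all agreed that the intent matches (intent_score == 1).
dataset = load_dataset("qanastek/MASSIVE", "de-DE", split="train")
clean = dataset.filter(
    lambda ex: all(j["intent_score"] == 1 for j in judgments_as_list(ex["judgments"]))
)
print(len(dataset), "->", len(clean))
```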

Data Splits

| Language | Train | Dev | Test |
|---|---|---|---|
| af-ZA | 11514 | 2033 | 2974 |
| am-ET | 11514 | 2033 | 2974 |
| ar-SA | 11514 | 2033 | 2974 |
| az-AZ | 11514 | 2033 | 2974 |
| bn-BD | 11514 | 2033 | 2974 |
| cy-GB | 11514 | 2033 | 2974 |
| da-DK | 11514 | 2033 | 2974 |
| de-DE | 11514 | 2033 | 2974 |
| el-GR | 11514 | 2033 | 2974 |
| en-US | 11514 | 2033 | 2974 |
| es-ES | 11514 | 2033 | 2974 |
| fa-IR | 11514 | 2033 | 2974 |
| fi-FI | 11514 | 2033 | 2974 |
| fr-FR | 11514 | 2033 | 2974 |
| he-IL | 11514 | 2033 | 2974 |
| hi-IN | 11514 | 2033 | 2974 |
| hu-HU | 11514 | 2033 | 2974 |
| hy-AM | 11514 | 2033 | 2974 |
| id-ID | 11514 | 2033 | 2974 |
| is-IS | 11514 | 2033 | 2974 |
| it-IT | 11514 | 2033 | 2974 |
| ja-JP | 11514 | 2033 | 2974 |
| jv-ID | 11514 | 2033 | 2974 |
| ka-GE | 11514 | 2033 | 2974 |
| km-KH | 11514 | 2033 | 2974 |
| kn-IN | 11514 | 2033 | 2974 |
| ko-KR | 11514 | 2033 | 2974 |
| lv-LV | 11514 | 2033 | 2974 |
| ml-IN | 11514 | 2033 | 2974 |
| mn-MN | 11514 | 2033 | 2974 |
| ms-MY | 11514 | 2033 | 2974 |
| my-MM | 11514 | 2033 | 2974 |
| nb-NO | 11514 | 2033 | 2974 |
| nl-NL | 11514 | 2033 | 2974 |
| pl-PL | 11514 | 2033 | 2974 |
| pt-PT | 11514 | 2033 | 2974 |
| ro-RO | 11514 | 2033 | 2974 |
| ru-RU | 11514 | 2033 | 2974 |
| sl-SL | 11514 | 2033 | 2974 |
| sq-AL | 11514 | 2033 | 2974 |
| sv-SE | 11514 | 2033 | 2974 |
| sw-KE | 11514 | 2033 | 2974 |
| ta-IN | 11514 | 2033 | 2974 |
| te-IN | 11514 | 2033 | 2974 |
| th-TH | 11514 | 2033 | 2974 |
| tl-PH | 11514 | 2033 | 2974 |
| tr-TR | 11514 | 2033 | 2974 |
| ur-PK | 11514 | 2033 | 2974 |
| vi-VN | 11514 | 2033 | 2974 |
| zh-CN | 11514 | 2033 | 2974 |
| zh-TW | 11514 | 2033 | 2974 |
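
The counts above can be sanity-checked directly. This is a minimal sketch; the dev partition may be exposed as "validation" depending on the loader.

```python
from datasets import load_dataset

# Every locale should report 11,514 / 2,033 / 2,974 rows, per the table above.
for locale in ("en-US", "sw-KE"):
    splits = load_dataset("qanastek/MASSIVE", locale)
    print(locale, {name: ds.num_rows for name, ds in splits.items()})
```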

Dataset Creation

Source Data

Who are the source language producers?

The corpus was produced and released by Amazon Alexa AI.

Personal and Sensitive Information

The corpus is free of personal or sensitive information.

Additional Information

Dataset Curators

Hugging Face dataset card: Labrak Yanis (not affiliated with the original corpus)

MASSIVE: Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan.

SLURP: Bastianelli, Emanuele and Vanzo, Andrea and Swietojanski, Pawel and Rieser, Verena.

Licensing Information

Copyright Amazon.com Inc. or its affiliates.

Copyright and license details for the data and modified code can be found in NOTICE.md.

License for the MASSIVE repository and code (Apache 2.0):

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

Citation Information

Please cite the following paper when using this dataset.

```bibtex
@misc{fitzgerald2022massive,
      title={MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages}, 
      author={Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan},
      year={2022},
      eprint={2204.08582},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@inproceedings{bastianelli-etal-2020-slurp,
    title = "{SLURP}: A Spoken Language Understanding Resource Package",
    author = "Bastianelli, Emanuele  and
      Vanzo, Andrea  and
      Swietojanski, Pawel  and
      Rieser, Verena",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.emnlp-main.588",
    doi = "10.18653/v1/2020.emnlp-main.588",
    pages = "7252--7262",
    abstract = "Spoken Language Understanding infers semantic meaning directly from audio data, and thus promises to reduce error propagation and misunderstandings in end-user applications. However, publicly available SLU resources are limited. In this paper, we release SLURP, a new SLU package containing the following: (1) A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets; (2) Competitive baselines based on state-of-the-art NLU and ASR systems; (3) A new transparent metric for entity labelling which enables a detailed error analysis for identifying potential areas of improvement. SLURP is available at https://github.com/pswietojanski/slurp."
}
```