---
language:
  - th
license: cc-by-4.0
size_categories:
  - n<1K
task_categories:
  - multiple-choice
dataset_info:
  features:
    - name: qn
      dtype: int64
    - name: label
      dtype: int64
    - name: pronoun
      dtype: string
    - name: quote
      dtype: string
    - name: source
      dtype: string
    - name: text
      dtype: string
    - name: options
      sequence: string
    - name: pronoun_loc
      dtype: int64
    - name: quote_loc
      dtype: int64
  splits:
    - name: test
      num_bytes: 108114
      num_examples: 285
  download_size: 35831
  dataset_size: 108114
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

# A collection of Thai Winograd Schemas

We present a collection of Winograd Schemas in the Thai language. These schemas are adapted from the original set of English Winograd Schemas proposed by Levesque et al., which was based on Ernest Davis's collection.

## Dataset Translation

Two professional translators were hired: native Thai speakers fluent in English, with experience translating from English to Thai. In a pilot phase, the first translator translated the first 85 sentences. Based on a qualitative analysis of those 85 sentences, guidelines were drawn up for the second translator, who translated the remaining 200 sentences; in total, 285 sentences were translated from English to Thai. The guidelines instructed the translators to adapt names and contexts to suit the Thai language while preserving the ambiguity and nuances of the original schemas. The translators were also asked to mark any translated names or passages they were unsure about, so that the validator in the next step could pay extra attention to those instances. For example, the names Paul and George were changed to Mana and Piti, respectively, adapting them to the Thai context while preserving the essence of the original sentence.

## Dataset Validation

The translated Winograd Schemas were reviewed by three native Thai speakers, and final adjustments were made to ensure clarity. A validator was given the translations and tasked with identifying any potential issues, paying particular attention to the text marked by the translators, which included changed names and translations the translators were unsure about. Based on the validator's feedback, final adjustments were made to the translations and typographical errors were corrected. Furthermore, the dataset is made publicly available so that other native Thai speakers can verify it and suggest any necessary adjustments.

## Dataset Structure

### Data Instances

Each instance contains a text passage with a designated pronoun and two possible answers indicating which entity in the passage the pronoun represents. An example instance looks like the following:

{
  'qn': 0,
  'label': 0,
  'pronoun': 'พวกเขา',
  'quote': 'พวกเขากลัวความรุนแรง',
  'source': '(Winograd 1972)',
  'text': 'สมาชิกสภาเทศบาลเมืองปฏิเสธใบอนุญาตผู้ชุมนุมเพราะพวกเขากลัวความรุนแรง',
  'options': ('สมาชิกสภาเทศบาลเมือง', 'ผู้ชุมนุม'),
  'pronoun_loc': 48,
  'quote_loc': 48
}
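
The `pronoun_loc` and `quote_loc` fields are character offsets into `text`. Below is a minimal sanity check on the example above, assuming the offsets count Unicode code points, as Python string indexing does:

```python
# Example instance from above; offsets assumed to be code-point indices.
example = {
    "pronoun": "พวกเขา",
    "quote": "พวกเขากลัวความรุนแรง",
    "text": "สมาชิกสภาเทศบาลเมืองปฏิเสธใบอนุญาตผู้ชุมนุมเพราะพวกเขากลัวความรุนแรง",
    "pronoun_loc": 48,
    "quote_loc": 48,
}

# Both offsets should point at the start of the pronoun / quote within the passage.
assert example["text"][example["pronoun_loc"]:].startswith(example["pronoun"])
assert example["text"][example["quote_loc"]:].startswith(example["quote"])
```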

### Data Fields

- `qn` (int): The question number, following the numbering of the original winograd_wsc285 collection
- `label` (int): The index of the correct answer in the `options` field
- `pronoun` (str): The pronoun in the passage to be resolved
- `quote` (str): The substring containing the key action or context surrounding the pronoun
- `source` (str): The source that contributed the original example
- `text` (str): The full text passage
- `options` (tuple[str]): The two candidate entities the pronoun may refer to (see the substitution sketch after this list)
- `pronoun_loc` (int): The character offset at which the pronoun starts in `text`
- `quote_loc` (int): The character offset at which the quote starts in `text`
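
For illustration, the two candidate readings of a schema can be built by substituting each option for the pronoun at its offset. This is only a minimal sketch of how the fields fit together, not the evaluation protocol used for the results below:

```python
def candidate_sentences(example: dict) -> list[str]:
    """Replace the pronoun with each option to form the two candidate readings."""
    start = example["pronoun_loc"]
    end = start + len(example["pronoun"])
    return [
        example["text"][:start] + option + example["text"][end:]
        for option in example["options"]
    ]

# The reading built from options[label] is the intended (correct) one.
```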

### Data Splits

Only a test split is included.
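
A minimal loading sketch with the Hugging Face `datasets` library; the repository ID `pakphum/winograd_th` is an assumption here and may differ from the actual Hub path:

```python
from datasets import load_dataset

# Assumed repository ID; replace with the actual Hub path if it differs.
ds = load_dataset("pakphum/winograd_th", split="test")

print(len(ds))        # 285 examples
print(ds[0]["text"])  # the full passage of the first schema
```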

## Evaluation

### Model Accuracy in English and Thai

| Model           | Accuracy (English) | Accuracy (Thai) |
|-----------------|--------------------|-----------------|
| Typhoon         | 64.56%             | 54.04%          |
| Claude-3-Haiku  | 62.81%             | 51.58%          |
| Claude-3-Sonnet | 80.70%             | 63.51%          |
| Claude-3-Opus   | 92.63%             | 77.54%          |
| GPT-3.5         | 71.93%             | 51.23%          |
| GPT-4           | 93.68%             | 72.28%          |
| Human           | 90%                | –               |

Table 1: Accuracy vs. Model in English and Thai
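
The exact prompting setup behind these numbers is not reproduced here. As a rough illustration only, an evaluation loop over the test split might look like the sketch below, where `ask_model` is a hypothetical stand-in for querying a model and returning the index of the option it selects:

```python
def ask_model(text: str, pronoun: str, options: list[str]) -> int:
    """Hypothetical stand-in for a real model call: always picks option 0.
    Replace with an actual LLM query to reproduce a zero-shot evaluation."""
    return 0

# Reusing `ds` from the loading sketch above.
correct = sum(
    ask_model(ex["text"], ex["pronoun"], list(ex["options"])) == ex["label"]
    for ex in ds
)
print(f"accuracy = {correct / len(ds):.2%}")
```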

## Acknowledgement

We extend our gratitude to Chanikarn Inthongpan and Korakoch Rienmek, who translated the schemas into the Thai language. We would also like to thank Sakrapee Namsak and Chonnawit Khumchoo for validating the translated Thai dataset.

Dataset Curated and Maintained by Phakphum Artkaew

Any comments or concerns can be directed to pa2497@nyu.edu.