albertvillanova committed
Commit
4b0e0e7
1 Parent(s): 9acf4f6

Add information to dataset card (#5)


- Add information to dataset card (9d2c9054c678acaf5eb7cfca22e6bce139cf3ce8)
- Delete empty information (4c4de5853b1510957e4a10e7aa2534f0975d217e)

Files changed (1): README.md (+32 -9)
README.md CHANGED

@@ -5,8 +5,7 @@ language_creators:
 - crowdsourced
 language:
 - en
-license:
-- unknown
+license: odc-by
 multilinguality:
 - monolingual
 size_categories:
@@ -282,14 +281,16 @@ configs:
 ## Dataset Description
 
 - **Homepage:** [Add homepage URL here if available (unless it's a GitHub repository)]()
-- **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]()
-- **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()
-- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
-- **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]()
+- **Repository:** https://github.com/Websail-NU/CODAH
+- **Paper:** https://aclanthology.org/W19-2008/
+- **Paper:** https://arxiv.org/abs/1904.04365
 
 ### Dataset Summary
 
-[More Information Needed]
+The COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense
+question-answering in the sentence completion style of SWAG. As opposed to other automatically generated
+NLI datasets, CODAH is adversarially constructed by humans who can view feedback from a pre-trained model
+and use this information to design challenging commonsense questions.
 
 ### Supported Tasks and Leaderboards
 
@@ -368,11 +369,33 @@ configs:
 
 ### Licensing Information
 
-[More Information Needed]
+The CODAH dataset is made available under the Open Data Commons Attribution License: http://opendatacommons.org/licenses/by/1.0/
 
 ### Citation Information
 
-[More Information Needed]
+```
+@inproceedings{chen-etal-2019-codah,
+    title = "{CODAH}: An Adversarially-Authored Question Answering Dataset for Common Sense",
+    author = "Chen, Michael  and
+      D{'}Arcy, Mike  and
+      Liu, Alisa  and
+      Fernandez, Jared  and
+      Downey, Doug",
+    editor = "Rogers, Anna  and
+      Drozd, Aleksandr  and
+      Rumshisky, Anna  and
+      Goldberg, Yoav",
+    booktitle = "Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for {NLP}",
+    month = jun,
+    year = "2019",
+    address = "Minneapolis, USA",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/W19-2008",
+    doi = "10.18653/v1/W19-2008",
+    pages = "63--69",
+    abstract = "Commonsense reasoning is a critical AI capability, but it is difficult to construct challenging datasets that test common sense. Recent neural question answering systems, based on large pre-trained models of language, have already achieved near-human-level performance on commonsense knowledge benchmarks. These systems do not possess human-level common sense, but are able to exploit limitations of the datasets to achieve human-level scores. We introduce the CODAH dataset, an adversarially-constructed evaluation dataset for testing common sense. CODAH forms a challenging extension to the recently-proposed SWAG dataset, which tests commonsense knowledge using sentence-completion questions that describe situations observed in video. To produce a more difficult dataset, we introduce a novel procedure for question acquisition in which workers author questions designed to target weaknesses of state-of-the-art neural question answering systems. Workers are rewarded for submissions that models fail to answer correctly both before and after fine-tuning (in cross-validation). We create 2.8k questions via this procedure and evaluate the performance of multiple state-of-the-art question answering systems on our dataset. We observe a significant gap between human performance, which is 95.3{\%}, and the performance of the best baseline accuracy of 65.3{\%} by the OpenAI GPT model.",
+}
+```
 
 ### Contributions
 
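For readers arriving at this commit from the dataset page, a minimal sketch (not part of the commit) of how the updated card is typically exercised with the Hugging Face `datasets` library follows. The `"codah"` config name and the `"train"` split are assumptions, not something this diff shows; the snippet therefore lists the real config names and inspects the schema rather than hard-coding field names.

```python
# Minimal sketch, not part of this commit: loading CODAH with the
# Hugging Face `datasets` library. The "codah" config and the "train"
# split are assumptions -- list the real ones first.
from datasets import get_dataset_config_names, load_dataset

print(get_dataset_config_names("codah"))  # e.g. the full set plus cross-validation folds

ds = load_dataset("codah", "codah", split="train")  # assumed config/split
print(ds.features)  # inspect the schema instead of assuming field names
print(ds[0])        # one adversarially-authored sentence-completion question
```

Inspecting `ds.features` before touching individual columns sidesteps any mismatch between the card's prose and the loader's actual field names.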