system (HF staff) committed on
Commit
45e133a
Parent: 03b2ffb

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1)
  1. README.md +5 -0
README.md CHANGED
@@ -46,6 +46,7 @@ task_ids:
 - [Dataset Curators](#dataset-curators)
 - [Licensing Information](#licensing-information)
 - [Citation Information](#citation-information)
+- [Contributions](#contributions)

 ## Dataset Description

@@ -231,3 +232,7 @@ Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset
 abstract = "We present QuAC, a dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total). The dialogs involve two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts from the text. QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context, as we show in a detailed qualitative evaluation. We also report results for a number of reference models, including a recently state-of-the-art reading comprehension architecture extended to model dialog context. Our best model underperforms humans by 20 F1, suggesting that there is significant room for future work on this data. Dataset, baseline, and leaderboard available at \url{http://quac.ai}.",
 }
 ```
+
+### Contributions
+
+Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
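For context, the card being updated documents the QuAC dataset. Below is a minimal sketch of loading it with the `datasets` library mentioned in the commit message; it assumes the dataset is published on the Hub under the `quac` identifier and that a `train` split exists, and it only inspects the schema rather than prescribing field names.

```python
from datasets import load_dataset

# Minimal sketch: load the QuAC dataset described in this card.
# Assumes the Hub identifier "quac" and datasets >= 1.3.0.
quac = load_dataset("quac")

# Show the available splits and the fields of one dialog record.
print(quac)
example = quac["train"][0]
print(sorted(example.keys()))
```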