julien-c committed
Commit
453b129
1 Parent(s): 441582d

Add description to card metadata


This metric wraps the official scoring script for version 2 of the Stanford Question
Answering Dataset (SQuAD).

Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by
crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span,
from the corresponding reading passage, or the question might be unanswerable.

SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions
written adversarially by crowdworkers to look similar to answerable ones.
To do well on SQuAD2.0, systems must not only answer questions when possible, but also
determine when no answer is supported by the paragraph and abstain from answering.
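
For context, a minimal sketch of how this metric is typically loaded and called through the evaluate library is shown below; the id, answer text, and answer_start values are invented for illustration.

import evaluate

# Load the SQuAD v2 metric, which wraps the official SQuAD 2.0 scoring script.
squad_v2 = evaluate.load("squad_v2")

# Illustrative inputs only: the id and answer values are made up.
# Each prediction carries a no_answer_probability so that abstentions on
# unanswerable questions can be scored.
predictions = [{"id": "example-1", "prediction_text": "1976", "no_answer_probability": 0.0}]
references = [{"id": "example-1", "answers": {"text": ["1976"], "answer_start": [97]}}]

results = squad_v2.compute(predictions=predictions, references=references)
print(results["exact"], results["f1"])  # exact-match and F1 over all questions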

Files changed (1)
  1. README.md +29 -4
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  title: SQuAD v2
- emoji: 🤗
+ emoji: 🤗
  colorFrom: blue
  colorTo: red
  sdk: gradio
@@ -8,10 +8,35 @@ sdk_version: 3.0.2
  app_file: app.py
  pinned: false
  tags:
- - evaluate
- - metric
- ---
+ - evaluate
+ - metric
+ description: >-
+   This metric wrap the official scoring script for version 2 of the Stanford
+   Question
+
+   Answering Dataset (SQuAD).
+
+
+   Stanford Question Answering Dataset (SQuAD) is a reading comprehension
+   dataset, consisting of questions posed by
+
+   crowdworkers on a set of Wikipedia articles, where the answer to every
+   question is a segment of text, or span,
+
+   from the corresponding reading passage, or the question might be unanswerable.

+
+   SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000
+   unanswerable questions
+
+   written adversarially by crowdworkers to look similar to answerable ones.
+
+   To do well on SQuAD2.0, systems must not only answer questions when possible,
+   but also
+
+   determine when no answer is supported by the paragraph and abstain from
+   answering.
+ ---
  # Metric Card for SQuAD v2

  ## Metric description