Update Space (evaluate main: e179b5b8)
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 title: SQuAD v2
-emoji: 🤗
+emoji: 🤗
 colorFrom: blue
 colorTo: red
 sdk: gradio
@@ -8,35 +8,10 @@ sdk_version: 3.0.2
 app_file: app.py
 pinned: false
 tags:
-
-
-description: >-
-  This metric wrap the official scoring script for version 2 of the Stanford
-  Question
-
-  Answering Dataset (SQuAD).
-
-
-  Stanford Question Answering Dataset (SQuAD) is a reading comprehension
-  dataset, consisting of questions posed by
-
-  crowdworkers on a set of Wikipedia articles, where the answer to every
-  question is a segment of text, or span,
-
-  from the corresponding reading passage, or the question might be unanswerable.
-
-
-  SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000
-  unanswerable questions
-
-  written adversarially by crowdworkers to look similar to answerable ones.
-
-  To do well on SQuAD2.0, systems must not only answer questions when possible,
-  but also
-
-  determine when no answer is supported by the paragraph and abstain from
-  answering.
+- evaluate
+- metric
 ---
+
 # Metric Card for SQuAD v2
 
 ## Metric description