Update README.md
README.md (CHANGED)

---
language:
- sv
license:
- mit
size_categories:
- 100K<n<1M
source_datasets:
- https://github.com/huggingface/datasets/tree/master/datasets/xsum
task_categories:
- conditional-text-generation
task_ids:
- summarization
---

# Dataset Card for Swedish Xsum Dataset

The Swedish xsum dataset is a machine-translated version of xsum (machine translation only, with no human post-editing), created to improve downstream fine-tuning on Swedish summarization tasks.

## Dataset Summary

For full details, see the original English version: https://huggingface.co/datasets/xsum

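As a minimal sketch of how the dataset can be loaded with the 🤗 `datasets` library (the repository id below is a placeholder, not the real Hub id of this dataset, so substitute the actual id):

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the actual Hub id of this dataset.
dataset = load_dataset("your-username/swedish-xsum")

print(dataset)  # DatasetDict with train/validation/test splits
```
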
21 |
+
### Data Fields
|
22 |
+
- `id`: a string containing the heximal formated SHA1 hash of the url where the story was retrieved from
|
23 |
+
- `document`: a string containing the body of the news article
|
24 |
+
- `summary`: a string containing the summary of the article as written by the article author
|
25 |
+
|
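A small sketch of how these fields can be read from a single example, assuming a `dataset` object loaded as in the snippet under Dataset Summary:

```python
example = dataset["train"][0]

print(example["id"])              # SHA-1 hash of the source article URL
print(example["summary"])         # Swedish summary of the article
print(example["document"][:200])  # first 200 characters of the article body
```
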
26 |
+
### Data Splits
|
27 |
+
The Swedish xsum dataset follows the same splits as the original English version and has 3 splits: _train_, _validation_, and _test_.
|
28 |
+
|
29 |
+
| Dataset Split | Number of Instances in Split |
|
30 |
+
| ------------- | ------------------------------------------- |
|
31 |
+
| Train | 204,045 |
|
32 |
+
| Validation | 11,332 |
|
33 |
+
| Test | 11,334 |
|
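
The split sizes in the table can be checked against a loaded copy of the dataset, again assuming the `dataset` object from the loading sketch above:

```python
# Expected counts: train 204,045 / validation 11,332 / test 11,334
for split_name, split in dataset.items():
    print(f"{split_name}: {len(split):,} examples")
```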