parquet-converter committed
Commit: 1da6d1d
1 parent: aea2595

Update parquet files

data/validation-00000-of-00001-cdaa2cc922c308d8.parquet → Gabriel--xsum_swe/parquet-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fac9e97d8b9059eef01d0542216f5391f5c06d1a12ccb93ed22a6c853ee486e0
- size 17423684
+ oid sha256:103cbd835d4b99a222f7ef9f67022e54bf1788b035dc2cd4e562bd86f22934ab
+ size 17862734
data/train-00000-of-00002-aaebfc9d917bde8d.parquet → Gabriel--xsum_swe/parquet-train-00000-of-00002.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d7f2ff67e16000308f5d5a39dc57a2711cf1645652b2e1c2ed3e467b82539e51
- size 158331860
+ oid sha256:82b3ae036c24905d8ddfc897a9ef6ec3e2294ca8a4630cb1049cf5a5dc513eb9
+ size 315754570
data/test-00000-of-00001-158e2161da0c36d7.parquet → Gabriel--xsum_swe/parquet-train-00001-of-00002.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e00fdeec12af4c97540fd25d6617d3184ca512294ad374f5b70854ac5446b471
- size 17749129
+ oid sha256:914700b96422132687d82538adf16293deb30ba241e938b4d60926c3317ca412
+ size 3124848
data/train-00001-of-00002-87aa294aadaea11b.parquet → Gabriel--xsum_swe/parquet-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:272ff5a4d1ddd7b09694d68dcd56a6fef525dc7094de0c315304f52f4e439600
- size 158936011
+ oid sha256:0669d6d4d0390313aff0be5c42a57494911766036396cc44d9dcfce25dd181c2
+ size 17510371
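Each of the hunks above swaps one Git LFS pointer for another: the repo stores only a small text stub (`version`, `oid`, `size`), while the actual parquet bytes live in LFS storage. A minimal sketch of reading such a pointer; `parse_lfs_pointer` is a hypothetical helper, not part of git or huggingface_hub:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer stub into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents taken from the first hunk above (parquet-test.parquet).
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:103cbd835d4b99a222f7ef9f67022e54bf1788b035dc2cd4e562bd86f22934ab
size 17862734
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])        # sha256:103cbd...
print(int(info["size"]))  # 17862734 bytes
```

The `oid` is the SHA-256 of the stored file, so a downloaded parquet shard can be verified by hashing it and comparing against the pointer.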
README.md DELETED
@@ -1,35 +0,0 @@
- ---
- language:
- - sv
- license:
- - mit
- size_categories:
- - 100K<n<1M
- source_datasets:
- - https://github.com/huggingface/datasets/tree/master/datasets/xsum
- task_categories:
- - summarization
- - text2text-generation
- task_ids: []
- tags:
- - conditional-text-generation
- ---
-
- # Dataset Card for Swedish Xsum Dataset
- The Swedish xsum dataset has been machine-translated to improve downstream fine-tuning on Swedish summarization tasks.
- ## Dataset Summary
- Read the full details in the original English version: https://huggingface.co/datasets/xsum
-
- ### Data Fields
- - `id`: a string containing the hexadecimal-formatted SHA1 hash of the URL the story was retrieved from
- - `document`: a string containing the body of the news article
- - `summary`: a string containing the summary of the article as written by the article author
-
- ### Data Splits
- The Swedish xsum dataset follows the same splits as the original English version and has 3 splits: _train_, _validation_, and _test_.
-
- | Dataset Split | Number of Instances in Split |
- | ------------- | ---------------------------- |
- | Train         | 204,045                      |
- | Validation    | 11,332                       |
- | Test          | 11,334                       |
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"Gabriel--xsum_swe": {"description": "", "citation": "", "homepage": "", "license": "", "features": {"document": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "id": {"dtype": "int64", "id": null, "_type": "Value"}, "__index_level_0__": {"dtype": "int64", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": null, "config_name": null, "version": null, "splits": {"train": {"name": "train", "num_bytes": 505262206, "num_examples": 204016, "dataset_name": "xsum_swe"}, "validation": {"name": "validation", "num_bytes": 27690452, "num_examples": 11327, "dataset_name": "xsum_swe"}, "test": {"name": "test", "num_bytes": 28240377, "num_examples": 11333, "dataset_name": "xsum_swe"}}, "download_checksums": null, "download_size": 352440684, "post_processing_size": null, "dataset_size": 561193035, "size_in_bytes": 913633719}}
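The per-split byte counts in the deleted `dataset_infos.json` are internally consistent: `dataset_size` is the sum of the three splits' `num_bytes`. A quick arithmetic check, with the numbers copied verbatim from the JSON above:

```python
# Split metadata copied from the deleted dataset_infos.json.
split_bytes = {
    "train": 505262206,
    "validation": 27690452,
    "test": 28240377,
}
split_examples = {
    "train": 204016,
    "validation": 11327,
    "test": 11333,
}

dataset_size = sum(split_bytes.values())
total_examples = sum(split_examples.values())

print(dataset_size)    # 561193035, matching the recorded "dataset_size"
print(total_examples)  # 226676
```

Note that these example counts (204,016 / 11,327 / 11,333) differ slightly from the split table in the deleted README, which listed 204,045 / 11,332 / 11,334.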