espejelomar committed on
Commit 57eee28
1 Parent(s): 2520926

Update README.md

Files changed (1): README.md (+39 -14)
README.md CHANGED
@@ -4,6 +4,11 @@ language:
- en
paperswithcode_id: embedding-data/sentence-compression
pretty_name: sentence-compression
+ task_categories:
+ - sentence-similarity
+ - paraphrase-mining
+ task_ids:
+ - semantic-similarity-classification
---

# Dataset Card for "sentence-compression"
@@ -44,24 +49,44 @@ pretty_name: sentence-compression
- **Total amount of disk used:** 14.2 MB

### Dataset Summary
- Large corpus of uncompressed and compressed sentences from news articles.
-
+ Dataset with pairs of equivalent sentences.

The dataset is provided "AS IS" without any warranty, express or implied.
- Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
-
- The algorithm to collect the data is described here: Overcoming the Lack of Parallel Data in Sentence Compression,
- Katja Filippova and Yasemin Altun, Proceedings of the 2013 Conference on Empirical Methods in
- Natural Language Processing (EMNLP '13), pp. 1481-1491. [pdf](https://aclanthology.org/D13-1155.pdf)
+ Google disclaims all liability for any damages, direct or indirect, resulting from using the dataset.

Disclaimer: The team releasing sentence-compression did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.

- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/google-research-datasets/sentence-compression)
-
+ ### Supported Tasks
+ - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
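As a concrete illustration of the similarity use case above (a minimal sketch, not from the card itself: the checkpoint `all-MiniLM-L6-v2` and the example pair are assumptions), a pretrained Sentence Transformers model can score how equivalent two sentences are:

```python
# Illustrative sketch: score one sentence pair with a pretrained model.
# The checkpoint and the two sentences are assumptions, not card content.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Encode both sentences and compare them with cosine similarity.
embeddings = model.encode([
    "The quick brown fox jumps over the lazy dog.",
    "A quick brown fox jumps over a lazy dog.",
])
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))  # near 1.0 for near-equivalent sentences
```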

### Languages
-
- [More Information Needed](https://github.com/google-research-datasets/sentence-compression)
+ - English.
+ ## Dataset Structure
+ Each example in the dataset contains a pair of equivalent sentences, formatted as a dictionary with the key "set" whose value is the list of sentences:
+ ```
+ {"set": [sentence_1, sentence_2]}
+ {"set": [sentence_1, sentence_2]}
+ ...
+ {"set": [sentence_1, sentence_2]}
+ ```
+ This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences.
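A rough sketch of that training setup (common Sentence Transformers conventions, not a recipe prescribed by the card; the checkpoint, batch size, and epoch count are assumptions) is shown below; it loads the dataset as described in the Usage Example that follows:

```python
# Minimal sketch: fine-tune a Sentence Transformers model on the "set" pairs.
# Checkpoint, batch size, and epochs are illustrative assumptions.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

dataset = load_dataset("embedding-data/sentence-compression")

# Each row's "set" holds two equivalent sentences: a ready-made positive pair.
train_examples = [InputExample(texts=row["set"]) for row in dataset["train"]]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

model = SentenceTransformer("all-MiniLM-L6-v2")
# MultipleNegativesRankingLoss treats the other pairs in a batch as negatives,
# so it trains from positive pairs alone.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```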
+ ### Usage Example
+ Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
+ ```python
+ from datasets import load_dataset
+ dataset = load_dataset("embedding-data/sentence-compression")
+ ```
+ The dataset is loaded as a `DatasetDict` and has the format:
+ ```python
+ DatasetDict({
+     train: Dataset({
+         features: ['set'],
+         num_rows: 180000
+     })
+ })
+ ```
+ Review an example `i` with:
+ ```python
+ dataset["train"][i]["set"]
+ ```

### Curation Rationale

@@ -117,6 +142,6 @@ Disclaimer: The team releasing sentence-compression did not upload the dataset t

### Contributions

- Thanks to [@katja-f](https://github.com/katja-f), [Google Research Datasets](https://github.com/google-research-datasets), [@dave-orr](https://github.com/dave-orr) for adding this dataset.
+
