add captions dataset
- README.md +2 -1
- coco_captions.jsonl.gz +3 -0
README.md CHANGED
@@ -9,7 +9,7 @@ All files are in a `jsonl.gz` format: Each line contains a JSON-object that repr
 The JSON objects can come in different formats:
 - **Pairs:** `["text1", "text2"]` - This is a positive pair that should be close in vector space.
 - **Triplets:** `["anchor", "positive", "negative"]` - This is a triplet: The `positive` text should be close to the `anchor`, while the `negative` text should be distant to the `anchor`.
-
+- **Sets:** `{"set": ["text1", "text2", ...]}` A set of texts describing the same thing, e.g. different paraphrases of the same question, different captions for the same image. Any combination of the elements is considered as a positive pair.
 
 ## Available Datasets
 
@@ -23,6 +23,7 @@ We measure the performance for each training dataset by training the [nreimers/M
 | --- | --- | :---: | :---: | --- |
 | [AllNLI.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/AllNLI.jsonl.gz) | Combination of SNLI + MultiNLI Triplets: (Anchor, Entailment_Text, Contradiction_Text) | 277,230 | 56.57 | [SNLI](https://huggingface.co/datasets/snli) and [MNLI](https://huggingface.co/datasets/multi_nli)
 | [altlex.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/altlex.jsonl.gz) | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 112,696 | 55.95 | [altlex](https://github.com/chridey/altlex/)
+| [coco_captions.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/coco_captions.jsonl.gz) | Different captions for the same image | 82,783 | 53.77 | [COCO](https://cocodataset.org/)
 | [codesearchnet.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/codesearchnet.jsonl.gz) | CodeSearchNet corpus is a dataset of (comment, code) pairs from opensource libraries hosted on GitHub. It contains code and documentation for several programming languages. | 1,151,414 | 55.80 | [CodeSearchNet](https://huggingface.co/datasets/code_search_net)
 | [eli5_question_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/eli5_question_answer.jsonl.gz) | (Question, Answer)-Pairs from ELI5 dataset | 325,475 | 58.24 | [ELI5](https://huggingface.co/datasets/eli5)
 | [fever_train.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/fever_train.jsonl.gz) | Training data from the FEVER corpus | 139,051 | 52.63 | [FEVER](https://huggingface.co/datasets/fever)
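The **Sets** format introduced in this hunk groups several texts that describe the same thing, and any two elements of a set count as a positive pair. As an illustration only (the file path and the pairing strategy via `itertools.combinations` are assumptions, not part of this commit), a reader for such a `jsonl.gz` file could look roughly like this:

```python
import gzip
import json
from itertools import combinations

def iter_positive_pairs(path="coco_captions.jsonl.gz"):
    """Yield positive text pairs from a jsonl.gz file in the formats
    described in the README: pairs/triplets as lists, sets as dicts.
    Path and pairing strategy are illustrative assumptions."""
    with gzip.open(path, "rt", encoding="utf8") as f:
        for line in f:
            obj = json.loads(line)
            if isinstance(obj, dict) and "set" in obj:
                # Any combination of elements in a set is a positive pair.
                yield from combinations(obj["set"], 2)
            elif isinstance(obj, list):
                # Pairs ["text1", "text2"] and triplets [anchor, positive, negative]:
                # the first two entries always form a positive pair.
                yield obj[0], obj[1]
```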
coco_captions.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf0a7a50a7a43f4f010690bf3f7365ef0ce98afd0ea5747d04d00c8f3917a5f8
+size 6316394
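The committed blob is only a Git LFS pointer (oid and size), so the actual ~6 MB archive has to be fetched separately, e.g. via the `resolve/main` URL pattern used in the README table. A minimal sketch, assuming `requests` is available and that each line holds a `{"set": [...]}` object of captions:

```python
import gzip
import io
import json

import requests

# Illustrative download via the Hub's resolve URL (same pattern as the README table);
# the file stored in git itself is just an LFS pointer, not the data.
URL = ("https://huggingface.co/datasets/sentence-transformers/"
       "embedding-training-data/resolve/main/coco_captions.jsonl.gz")

raw = requests.get(URL).content
with gzip.open(io.BytesIO(raw), "rt", encoding="utf8") as f:
    first_record = json.loads(next(f))

print(first_record)  # expected shape: {"set": ["caption 1", "caption 2", ...]}
```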