# MS MARCO Passages Hard Negatives

> [!NOTE]
> This repository contains raw datasets, all of which have also been formatted for easy training in the [MS MARCO Mined Triplets](https://huggingface.co/collections/sentence-transformers/ms-marco-mined-triplets-6644d6f1ff58c5103fe65f23) collection. We recommend looking there first.

[MS MARCO](https://microsoft.github.io/msmarco/) is a large-scale information retrieval corpus created from real user search queries on the Bing search engine.

This dataset repository contains files that are helpful for training bi-encoder models, e.g. using [sentence-transformers](https://www.sbert.net).

## Training Code

Here is an example of how these files can be used to train bi-encoders: [SBERT.net - MS MARCO - MarginMSE](https://www.sbert.net/examples/training/ms_marco/README.html#marginmse)

## cross-encoder-ms-marco-MiniLM-L-6-v2-scores.pkl.gz

This is a pickled dictionary in the format: `scores[qid][pid] -> cross_encoder_score`

It contains 160 million cross-encoder scores for (query, paragraph) pairs computed with the [cross-encoder/ms-marco-MiniLM-L-6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L-6-v2) model.
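A minimal sketch of working with this format. The snippet builds a tiny synthetic `scores[qid][pid]` dictionary (the score values and the second paragraph ID are made up for illustration; the qid/pid come from the example above) and round-trips it through a gzip-compressed pickle, which is the same way the real file is stored and loaded:

```python
import gzip
import pickle

# The file stores a nested dictionary: scores[qid][pid] -> cross_encoder_score.
# Build a tiny synthetic example with the same shape (values are placeholders).
scores = {867436: {5238393: 9.5, 1234567: -4.25}}

# Write it the same way the real file is stored: a gzip-compressed pickle.
with gzip.open("example-scores.pkl.gz", "wb") as f:
    pickle.dump(scores, f)

# Loading cross-encoder-ms-marco-MiniLM-L-6-v2-scores.pkl.gz works the same way
# (note: the real file is large, as it holds 160M scores).
with gzip.open("example-scores.pkl.gz", "rb") as f:
    loaded = pickle.load(f)

# Look up the cross-encoder score for a (query, paragraph) pair.
print(loaded[867436][5238393])
```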

## msmarco-hard-negatives.jsonl.gz

This is a JSONL file: each line is a JSON object in the following format:
```
{"qid": 867436, "pos": [5238393], "neg": {"bm25": [...], ...}}
```

`qid` is the query ID from MS MARCO; `pos` is a list of paragraph IDs for positive passages; `neg` is a dictionary of hard negatives that we mined using different (mainly dense retrieval) systems.
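Reading this file is a one-JSON-object-per-line loop over a gzip stream. The sketch below writes a single synthetic entry in the same shape (the negative paragraph IDs are placeholders, not real mined negatives) and then parses it back:

```python
import gzip
import json

# One synthetic line in the same shape as msmarco-hard-negatives.jsonl.gz
# (the "neg" paragraph IDs here are placeholders for illustration).
entry_in = {"qid": 867436, "pos": [5238393], "neg": {"bm25": [7067032, 2398616]}}
with gzip.open("example-hard-negatives.jsonl.gz", "wt", encoding="utf8") as f:
    f.write(json.dumps(entry_in) + "\n")

# Reading the real file works the same way: one JSON object per line.
with gzip.open("example-hard-negatives.jsonl.gz", "rt", encoding="utf8") as f:
    for line in f:
        entry = json.loads(line)
        positives = entry["pos"]               # paragraph IDs of positive passages
        bm25_negatives = entry["neg"]["bm25"]  # hard negatives from one mining system
```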

It contains hard negatives mined with BM25 (using Elasticsearch) and the following dense models:
```
msmarco-distilbert-base-tas-b
msmarco-distilbert-base-v3
msmarco-MiniLM-L-6-v3
distilbert-margin_mse-cls-dot-v2
distilbert-margin_mse-cls-dot-v1
distilbert-margin_mse-mean-dot-v1
mpnet-margin_mse-mean-v1
co-condenser-margin_mse-cls-v1
distilbert-margin_mse-mnrl-mean-v1
distilbert-margin_mse-sym_mnrl-mean-v1
distilbert-margin_mse-sym_mnrl-mean-v2
co-condenser-margin_mse-sym_mnrl-mean-v1
```

For each system, the 50 most similar paragraphs were mined for a given query.
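For MarginMSE training (linked in the Training Code section), the two files are typically combined: each (query, positive, negative) triplet is labeled with the cross-encoder margin `score(qid, pos_pid) - score(qid, neg_pid)`. A hedged sketch with synthetic scores (the real values come from the `.pkl.gz` file, and the negative IDs below are placeholders):

```python
# Synthetic cross-encoder scores; in practice these come from
# cross-encoder-ms-marco-MiniLM-L-6-v2-scores.pkl.gz.
scores = {867436: {5238393: 9.5, 7067032: 1.5, 2398616: -0.5}}

# One hard-negatives entry; the "neg" IDs are placeholders for illustration.
entry = {"qid": 867436, "pos": [5238393], "neg": {"bm25": [7067032, 2398616]}}

qid = entry["qid"]
pos_pid = entry["pos"][0]

# MarginMSE target: score(query, positive) - score(query, negative)
margins = [
    scores[qid][pos_pid] - scores[qid][neg_pid]
    for neg_pid in entry["neg"]["bm25"]
]
print(margins)  # -> [8.0, 10.0]
```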