nreimers committed
Commit 28f3bfb
1 Parent(s): dce026d
Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -7,7 +7,7 @@ tags:
  - transformers
  ---
 
- # msmarco-bert-base-dot-v4
+ # msmarco-distilbert-base-dot-v4
  This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and was designed for **semantic search**. It has been trained on 500K (query, answer) pairs from the [MS MARCO dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking/). For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
 
 
@@ -26,7 +26,7 @@ query = "How many people live in London?"
  docs = ["Around 9 Million people live in London", "London is known for its financial district"]
 
  #Load the model
- model = SentenceTransformer('sentence-transformers/msmarco-bert-base-dot-v4')
+ model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-dot-v4')
 
  #Encode query and documents
  query_emb = model.encode(query)
@@ -81,8 +81,8 @@ query = "How many people live in London?"
  docs = ["Around 9 Million people live in London", "London is known for its financial district"]
 
  # Load model from HuggingFace Hub
- tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-bert-base-dot-v4")
- model = AutoModel.from_pretrained("sentence-transformers/msmarco-bert-base-dot-v4")
+ tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-distilbert-base-dot-v4")
+ model = AutoModel.from_pretrained("sentence-transformers/msmarco-distilbert-base-dot-v4")
 
  #Encode query and docs
  query_emb = encode(query)
@@ -119,7 +119,7 @@ In the following some technical details how this model must be used:
 
  <!--- Describe how your model was evaluated -->
 
- For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=msmarco-bert-base-dot-v4)
+ For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=msmarco-distilbert-base-dot-v4)
 
 
  ## Training
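
The only substantive change in this commit is the model identifier (bert → distilbert). As a quick sanity check of the renamed id, here is a minimal sketch of the semantic-search usage the README snippets above imply, using the standard sentence-transformers API; scoring with `util.dot_score` is an assumption based on the card's description of dot-product training and is not part of the committed README text:

```python
# Minimal sketch (not part of the commit): query the model under its new id
# and rank documents by dot-product score, as the card describes.
from sentence_transformers import SentenceTransformer, util

query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Model id as introduced by this commit
model = SentenceTransformer("sentence-transformers/msmarco-distilbert-base-dot-v4")

# Encode query and documents into 768-dimensional dense vectors
query_emb = model.encode(query)
doc_emb = model.encode(docs)

# The model was trained for dot-product similarity, so score with util.dot_score
scores = util.dot_score(query_emb, doc_emb)[0].tolist()
for doc, score in sorted(zip(docs, scores), key=lambda x: x[1], reverse=True):
    print(score, doc)
```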