pritamdeka committed
Commit db5eae4
1 Parent(s): 6d87c82

Update README.md

Files changed (1):
  1. README.md +5 -5
README.md CHANGED
@@ -7,9 +7,9 @@ tags:
 - transformers
 ---
 
-# {pritamdeka/BioBERT-mnli-snli-scinli-stsb}
+# pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb
 
-This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It has been trained over the SNLI, MNLI, SCINLI and STSB datasets for providing robust sentence embeddings.
+This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. It has been trained on the SNLI, MNLI, SCINLI, SCITAIL, MEDNLI and STSB datasets to provide robust sentence embeddings.
 
 <!--- Describe your model here -->
 
@@ -27,7 +27,7 @@ Then you can use the model like this:
 from sentence_transformers import SentenceTransformer
 sentences = ["This is an example sentence", "Each sentence is converted"]
 
-model = SentenceTransformer('{pritamdeka/BioBERT-mnli-snli-scinli-stsb}')
+model = SentenceTransformer('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb')
 embeddings = model.encode(sentences)
 print(embeddings)
 ```
@@ -53,8 +53,8 @@ def mean_pooling(model_output, attention_mask):
 sentences = ['This is an example sentence', 'Each sentence is converted']
 
 # Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('{pritamdeka/BioBERT-mnli-snli-scinli-stsb}')
-model = AutoModel.from_pretrained('{pritamdeka/BioBERT-mnli-snli-scinli-stsb}')
+tokenizer = AutoTokenizer.from_pretrained('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb')
+model = AutoModel.from_pretrained('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb')
 
 # Tokenize sentences
 encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
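The second README snippet feeds `encoded_input` through the model and pools token embeddings with a `mean_pooling` helper (visible in the `@@ -53,8 +53,8 @@` hunk header) that averages over the attention mask so padding tokens are ignored. A minimal NumPy sketch of that pooling step, using toy arrays with made-up values instead of real model output:

```python
import numpy as np

def mean_pooling(token_embeddings, attention_mask):
    """Masked mean over the sequence axis.

    token_embeddings: (batch, seq_len, dim) array of per-token vectors.
    attention_mask:   (batch, seq_len) array of 1s (real tokens) and 0s (padding).
    """
    # Expand the mask to (batch, seq_len, 1) so it broadcasts over the embedding dim
    mask = attention_mask[..., None].astype(float)
    # Sum only the unmasked token vectors
    summed = (token_embeddings * mask).sum(axis=1)
    # Count unmasked tokens per sentence; clip to avoid division by zero
    counts = np.clip(mask.sum(axis=1), 1e-9, None)
    return summed / counts

# Toy check: the padded third position (all 99s) must not affect the result
emb = np.array([[[1.0, 2.0], [3.0, 4.0], [99.0, 99.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pooling(emb, mask))  # [[2. 3.]]
```

This mirrors the torch version referenced in the diff; with the real model, `token_embeddings` would be `model_output[0]` and `attention_mask` would come from `encoded_input`.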