Commit d84dc57 (parent: cb5c300) by julien-c

Migrate model card from transformers-repo

Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/sentence-transformers/bert-base-nli-mean-tokens/README.md

Files changed (1): README.md added (+85 lines)
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- snli
- multi_nli
---

# BERT base model (uncased) for Sentence Embeddings
This is the `bert-base-nli-mean-tokens` model from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. The sentence-transformers library lets you train and use Transformer models for generating sentence and text embeddings.
The model is described in the paper [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084).

## Usage (HuggingFace Models Repository)

You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling - take the attention mask into account so padding tokens are excluded from the average
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
    sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
    return sum_embeddings / sum_mask


# Sentences we want sentence embeddings for
sentences = ['This framework generates embeddings for each input sentence',
             'Sentences are passed as a list of strings.',
             'The quick brown fox jumps over the lazy dog.']

# Load the tokenizer and model from the Hugging Face model repository
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")
model = AutoModel.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling - in this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
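At this point `sentence_embeddings` is a tensor with one 768-dimensional row per input sentence (768 being the hidden size of BERT base). As a minimal illustrative sketch (an addition, not part of the original card), the snippet below continues from the code above and compares the embeddings via cosine similarity in plain PyTorch:
```python
import torch.nn.functional as F

# Unit-normalize each embedding; the matrix product of unit vectors equals cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized @ normalized.T  # shape (3, 3), diagonal is 1.0

for i, sentence in enumerate(sentences):
    print(sentence)
    print(cosine_scores[i])
```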

## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```

Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('bert-base-nli-mean-tokens')
sentences = ['This framework generates embeddings for each input sentence',
             'Sentences are passed as a list of strings.',
             'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)

print("Sentence embeddings:")
print(sentence_embeddings)
```
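For reference, `model.encode` returns a NumPy array by default, with one row per sentence. As an illustrative follow-up (again an addition, not part of the original card), this sketch scores all sentence pairs with plain NumPy, continuing from the variables above:
```python
import numpy as np

# model.encode returned a NumPy array of shape (num_sentences, 768)
norms = np.linalg.norm(sentence_embeddings, axis=1, keepdims=True)
unit_embeddings = sentence_embeddings / norms
cosine_scores = unit_embeddings @ unit_embeddings.T

print(np.round(cosine_scores, 3))  # pairwise cosine similarities
```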

## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "http://arxiv.org/abs/1908.10084",
}
```