shibing624 committed on
Commit
d6fb8b9
1 Parent(s): 53da5f4

Create README.md

Files changed (1): README.md (+74, -0)
README.md ADDED
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- text2vec
- feature-extraction
- sentence-similarity
- transformers
---
# shibing624/text2vec
This is a CoSENT (Cosine Sentence) model: it maps sentences to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (text2vec)
Using this model becomes easy when you have [text2vec](https://github.com/shibing624/text2vec) installed:
```bash
pip install -U text2vec
```
Then you can use the model like this:
```python
from text2vec import SBert

sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']

model = SBert('shibing624/text2vec-base-chinese')
embeddings = model.encode(sentences)
print(embeddings)
```
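The two example sentences are paraphrases of each other, so their embeddings should score high under cosine similarity. A minimal sketch of that check, continuing the snippet above and assuming `model.encode` returns array-like vectors:
```python
import numpy as np

# Continues the snippet above: embeddings has shape (2, 768),
# one 768-dim vector per input sentence.
a, b = embeddings[0], embeddings[1]
cos_sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {cos_sim:.4f}")  # close to 1.0 for paraphrases
```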
## Usage (HuggingFace Transformers)
Without [text2vec](https://github.com/shibing624/text2vec), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import BertTokenizer, BertModel
import torch

# Mean Pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Load model from HuggingFace Hub
tokenizer = BertTokenizer.from_pretrained('shibing624/text2vec-base-chinese')
model = BertModel.from_pretrained('shibing624/text2vec-base-chinese')
sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
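For the semantic-search use case mentioned above, the same pieces can rank a small corpus against a query by cosine similarity. A minimal sketch reusing the `tokenizer`, `model`, and `mean_pooling` defined above; the corpus and query sentences here are made-up examples, not from the model card:
```python
import torch.nn.functional as F

def embed(texts):
    # Tokenize, run the model, and mean-pool, exactly as above.
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        out = model(**enc)
    return mean_pooling(out, enc['attention_mask'])

corpus = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡', '今天天气怎么样']  # hypothetical mini-corpus
query = ['怎么修改花呗的银行卡']  # hypothetical query

# L2-normalize so the dot product equals cosine similarity.
corpus_emb = F.normalize(embed(corpus), p=2, dim=1)
query_emb = F.normalize(embed(query), p=2, dim=1)
scores = (query_emb @ corpus_emb.T)[0]  # one score per corpus sentence

for score, sent in sorted(zip(scores.tolist(), corpus), reverse=True):
    print(f"{score:.4f}  {sent}")
```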
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [text2vec](https://github.com/shibing624/text2vec)

## Full Model Architecture
```
SBert(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_mean_tokens': True})
)
```
## Citing & Authors
This model was trained with [text2vec/cosent](https://github.com/shibing624/text2vec/cosent).

If you find this model helpful, feel free to cite:
```bibtex
@software{text2vec,
  author = {Xu Ming},
  title = {text2vec: A Tool for Text to Vector},
  year = {2022},
  url = {https://github.com/shibing624/text2vec},
}
```