---
license: mit
language:
- en
- ru
metrics:
- accuracy
- f1
- recall
library_name: transformers
pipeline_tag: sentence-similarity
tags:
- mteb
- retrieval
- retriever
- pruned
- e5
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# E5-base-en-ru

## Model info

This is a vocabulary-pruned version of [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base).

It uses only Russian and English tokens.
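
The card does not include the pruning script, but the general idea can be sketched as follows: collect the token ids that appear when tokenizing an English-Russian corpus, then slice the input embedding matrix down to those rows. This is an illustrative sketch only (the corpus here is a stand-in, and a real run must also rebuild the tokenizer's vocabulary to match the new ids):

```python
# Illustrative sketch of vocabulary pruning, not the exact procedure used for this model.
import torch
from transformers import XLMRobertaTokenizer, XLMRobertaModel

tokenizer = XLMRobertaTokenizer.from_pretrained('intfloat/multilingual-e5-base')
model = XLMRobertaModel.from_pretrained('intfloat/multilingual-e5-base')

# Hypothetical corpus; in practice, large English and Russian text collections.
corpus = ['Example English sentence.', 'Пример русского предложения.']
special_ids = set(tokenizer.all_special_ids)
keep_ids = sorted(special_ids | {i for text in corpus for i in tokenizer(text)['input_ids']})

# Slice the input embedding matrix down to the kept rows.
old_embeddings = model.get_input_embeddings().weight.data
new_embeddings = torch.nn.Embedding(len(keep_ids), old_embeddings.size(1))
new_embeddings.weight.data = old_embeddings[keep_ids].clone()
model.set_input_embeddings(new_embeddings)
model.config.vocab_size = len(keep_ids)
```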

### Size

| | intfloat/multilingual-e5-base | d0rj/e5-base-en-ru |
| --- | --- | --- |
| Model size (MB) | 1060.65 | 504.89 |
| Params (count) | 278,043,648 | 132,354,048 |
| Word embedding params (count) | 192,001,536 | 46,311,936 |
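
Both embedding counts are consistent with a hidden size of 768: 250,002 × 768 = 192,001,536 for the original XLM-R vocabulary and 60,302 × 768 = 46,311,936 for the pruned one. Their difference of 145,689,600 parameters exactly matches the gap in total parameter counts, so the entire reduction comes from the embedding matrix; the transformer body is unchanged.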

### Performance

Performance on the SberQuAD dev benchmark.

| Metric on SberQuAD (4122 questions) | intfloat/multilingual-e5-base | d0rj/e5-base-en-ru |
| --- | --- | --- |
| recall@3 | | |
| map@3 | | |
| mrr@3 | | |
| recall@5 | | |
| map@5 | | |
| mrr@5 | | |
| recall@10 | | |
| map@10 | | |
| mrr@10 | | |

## Usage

- Use the **dot product** distance for retrieval.

- Use the "query: " and "passage: " prefixes, respectively, for asymmetric tasks such as passage retrieval in open QA or ad-hoc information retrieval (a small helper sketch follows this list).

- Use the "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, and paraphrase retrieval.

- Use the "query: " prefix if you want to use embeddings as features, e.g. for linear-probing classification or clustering.
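
A minimal sketch of these prefix conventions; `with_prefix` is a hypothetical helper for illustration, not part of this model's API:

```python
# Hypothetical helper illustrating the prefix rules above (not part of the model's API).
def with_prefix(texts: list[str], kind: str) -> list[str]:
    assert kind in ('query', 'passage')
    return [f'{kind}: {t}' for t in texts]

# Asymmetric retrieval: prefix queries and passages differently.
queries = with_prefix(['Где был создан первый троллейбус?'], 'query')
passages = with_prefix(['The first trolleybus was created in Germany...'], 'passage')

# Symmetric tasks (similarity, clustering, features): prefix both sides with "query: ".
pair = with_prefix(['первый троллейбус', 'the first trolleybus'], 'query')
```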

### transformers

#### Direct usage

```python
import torch.nn.functional as F
from torch import Tensor
from transformers import XLMRobertaTokenizer, XLMRobertaModel


def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    # Average over real tokens only: zero out padding, then divide by the token count.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Every text gets a "query: " or "passage: " prefix (see Usage above).
input_texts = [
    'query: How does a corporate website differ from a business card website?',
    'query: Где был создан первый троллейбус?',
    'passage: The first trolleybus was created in Germany by engineer Werner von Siemens, probably influenced by the idea of his brother, Dr. Wilhelm Siemens, who lived in England, expressed on May 18, 1881 at the twenty-second meeting of the Royal Scientific Society. The electrical circuit was carried out by an eight-wheeled cart (Kontaktwagen) rolling along two parallel contact wires. The wires were located quite close to each other, and in strong winds they often overlapped, which led to short circuits. An experimental trolleybus line with a length of 540 m (591 yards), opened by Siemens & Halske in the Berlin suburb of Halensee, operated from April 29 to June 13, 1882.',
    'passage: Корпоративный сайт — содержит полную информацию о компании-владельце, услугах/продукции, событиях в жизни компании. Отличается от сайта-визитки и представительского сайта полнотой представленной информации, зачастую содержит различные функциональные инструменты для работы с контентом (поиск и фильтры, календари событий, фотогалереи, корпоративные блоги, форумы). Может быть интегрирован с внутренними информационными системами компании-владельца (КИС, CRM, бухгалтерскими системами). Может содержать закрытые разделы для тех или иных групп пользователей — сотрудников, дилеров, контрагентов и пр.',
]

tokenizer = XLMRobertaTokenizer.from_pretrained('d0rj/e5-base-en-ru', use_cache=False)
model = XLMRobertaModel.from_pretrained('d0rj/e5-base-en-ru', use_cache=False)

batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# L2-normalise so the dot product below is cosine similarity.
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[68.59542846679688, 81.75910949707031], [80.36100769042969, 64.77748107910156]]
```

#### Pipeline

```python
from transformers import pipeline


pipe = pipeline('feature-extraction', model='d0rj/e5-base-en-ru')
embeddings = pipe(input_texts, return_tensors=True)  # input_texts from the example above
embeddings[0].size()
# torch.Size([1, 17, 768])
```
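
The pipeline returns token-level features, one vector per token, so you still need to pool them into a single sentence embedding. A minimal sketch, assuming the same average pooling as in the direct-usage example (without padding every token is real, so a plain mean suffices):

```python
# Mean over the sequence dimension -> one 768-d vector for the first text.
sentence_embedding = embeddings[0].mean(dim=1)
sentence_embedding.size()
# torch.Size([1, 768])
```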

### sentence-transformers

```python
from sentence_transformers import SentenceTransformer


sentences = [
    'query: Что такое круглые тензоры?',
    'passage: Abstract: we introduce a novel method for compressing round tensors based on their inherent radial symmetry. We start by generalising PCA and eigen decomposition on round tensors...',
]

model = SentenceTransformer('d0rj/e5-base-en-ru')
embeddings = model.encode(sentences, convert_to_tensor=True)
embeddings.size()
# torch.Size([2, 768])
```
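
Query-passage scores can then be computed with the dot product, as recommended above, for example via the `util.dot_score` helper from sentence-transformers:

```python
from sentence_transformers import util

# 1x1 tensor with the relevance score of the passage for the query.
score = util.dot_score(embeddings[0], embeddings[1])
```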