Pclanglais committed on
Commit 5750114
1 Parent(s): 267a024

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ chroma_database/chroma.sqlite3 filter=lfs diff=lfs merge=lfs -text
+ e5-multilingual/tokenizer.json filter=lfs diff=lfs merge=lfs -text
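
Taken together, the commit uploads two related artifacts: a persisted Chroma vector store (chroma_database/) and a local sentence-transformers copy of multilingual-e5-base (e5-multilingual/). A minimal sketch (not part of the commit) of how the two could be used together — assuming a chromadb version compatible with the uploaded sqlite3/HNSW layout (0.4.x-era) and the repo root as the working directory; the collection name is not visible in this diff:

```python
import chromadb
from sentence_transformers import SentenceTransformer

# Open the uploaded persistent store and list its collections
# (their names cannot be recovered from the LFS pointers alone).
client = chromadb.PersistentClient(path="chroma_database")
collections = client.list_collections()
print(collections)

# Embed a query with the bundled model; E5 expects a "query: " prefix.
model = SentenceTransformer("e5-multilingual")
query_embedding = model.encode("query: example question").tolist()

# Query the first collection with the E5 embedding.
collection = client.get_collection(name=collections[0].name)
results = collection.query(query_embeddings=[query_embedding], n_results=3)
print(results["documents"])
```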
chroma_database/chroma.sqlite3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6528887161dc821f61f5574f83e7ee26d4f84acba48f1703bfe95796e9b59a58
+ size 1130844160
chroma_database/dc8442df-66a7-4647-8745-0a05b1e6aa6f/data_level0.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a475e6d29f422e02c2223420cc1db9c92175e30e625fc80cbce3de58edbdca5
+ size 115632000
chroma_database/dc8442df-66a7-4647-8745-0a05b1e6aa6f/header.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e098c2df1c50fa43ea7ee073ad65d99c9bd7a1b4e472c03b848cfeefcb773fc7
+ size 100
chroma_database/dc8442df-66a7-4647-8745-0a05b1e6aa6f/index_metadata.pickle ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b9c88028ca7386609a5f424e20f2e1f10a6d63430ab4e671643b69dee4bfd437
+ size 959204
chroma_database/dc8442df-66a7-4647-8745-0a05b1e6aa6f/length.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9df577d07849b524b98e19ec2b788f99907503a9097c012763db7ea4eebcb1de
+ size 144000
chroma_database/dc8442df-66a7-4647-8745-0a05b1e6aa6f/link_lists.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f571557ce57ab6b389515a74b8b09e0f7edcb154f23bbbee5c51749f5db1b1b
+ size 301352
e5-multilingual/1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false
+ }
e5-multilingual/README.md ADDED
@@ -0,0 +1,120 @@
+ ---
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+
+ ---
+
+ # Multilingual-E5-base (sentence-transformers)
+
+ This is the sentence-transformers version of the [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
+
+ <!--- Describe your model here -->
+
+ ## Usage (Sentence-Transformers)
+
+ Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
+
+ ```
+ pip install -U sentence-transformers
+ ```
+
+ Then you can use the model like this:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Each input text should start with "query: " or "passage: ", even for non-English texts.
+ # For tasks other than retrieval, you can simply use the "query: " prefix.
+ sentences = ['query: how much protein should a female eat',
+              'query: 南瓜的家常做法',
+              "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
+              "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"]
+
+ model = SentenceTransformer('embaas/sentence-transformers-multilingual-e5-base')
+ embeddings = model.encode(sentences)
+ print(embeddings)
+ ```
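+
+ For retrieval-style scoring, the query embeddings can then be compared to the passage embeddings by cosine similarity. A minimal sketch (not part of the original card), reusing the `embeddings` computed above:
+
+ ```python
+ from sentence_transformers import util
+
+ # Cosine similarity between the two queries (rows) and the two passages (columns).
+ scores = util.cos_sim(embeddings[:2], embeddings[2:])
+ print(scores)
+ ```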
+
+ ## Usage (Huggingface)
+
+ ```python
+ import torch.nn.functional as F
+
+ from torch import Tensor
+ from transformers import AutoTokenizer, AutoModel
+
+
+ def average_pool(last_hidden_states: Tensor,
+                  attention_mask: Tensor) -> Tensor:
+     last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
+     return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
+
+
+ # Each input text should start with "query: " or "passage: ", even for non-English texts.
+ # For tasks other than retrieval, you can simply use the "query: " prefix.
+ input_texts = ['query: how much protein should a female eat',
+                'query: 南瓜的家常做法',
+                "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
+                "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"]
+
+ tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-base')
+ model = AutoModel.from_pretrained('intfloat/multilingual-e5-base')
+
+ # Tokenize the input texts
+ batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
+
+ outputs = model(**batch_dict)
+ embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
+
+ # (Optionally) normalize embeddings
+ embeddings = F.normalize(embeddings, p=2, dim=1)
+ scores = (embeddings[:2] @ embeddings[2:].T) * 100
+ print(scores.tolist())
+ ```
+
+ ## Usage (API)
+
+ You can use the [embaas API](https://embaas.io) to encode your input. Get your free API key from [embaas.io](https://embaas.io).
+
+ ```python
+ import requests
+
+ url = "https://api.embaas.io/v1/embeddings/"
+
+ headers = {
+     "Content-Type": "application/json",
+     "Authorization": "Bearer ${YOUR_API_KEY}"
+ }
+
+ data = {
+     "texts": ["This is an example sentence.", "Here is another sentence."],
+     "instruction": "query",
+     "model": "multilingual-e5-base"
+ }
+
+ response = requests.post(url, json=data, headers=headers)
+ print(response.json())
+ ```
+
+ ## Evaluation Results
+
+ <!--- Describe how your model was evaluated -->
+
+ You can find the MTEB results [here](https://huggingface.co/spaces/mteb/leaderboard).
+
+
+ ## Full Model Architecture
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
+   (2): Normalize()
+ )
+ ```
+
+ ## Citing & Authors
+
+ <!--- Describe where people can find more information -->
e5-multilingual/config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "intfloat/multilingual-e5-base",
+   "architectures": [
+     "XLMRobertaModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "xlm-roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.28.1",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 250002
+ }
e5-multilingual/config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "__version__": {
+     "sentence_transformers": "2.2.2",
+     "transformers": "4.28.1",
+     "pytorch": "2.0.1+cu117"
+   }
+ }
e5-multilingual/modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
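
This modules.json declares the three-stage pipeline (Transformer → Pooling → Normalize) that sentence-transformers reassembles when the folder is loaded. A minimal sketch — assuming the repo root as the working directory, so "e5-multilingual" resolves to the folder added in this commit:

```python
from sentence_transformers import SentenceTransformer

# Loading the local folder rebuilds the pipeline exactly as declared above.
model = SentenceTransformer("e5-multilingual")
emb = model.encode(["query: hello world"])
print(emb.shape)  # (1, 768); already L2-normalized by the Normalize module
```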
e5-multilingual/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f061cb7641880f52895cbacab7c4ab39b0844e2e6b73794f2798de460d9fa418
+ size 1112242989
e5-multilingual/sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
e5-multilingual/sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
+ size 5069051
e5-multilingual/special_tokens_map.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "bos_token": "<s>",
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "unk_token": "<unk>"
+ }
e5-multilingual/tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:62c24cdc13d4c9952d63718d6c9fa4c287974249e16b7ade6d5a85e7bbb75626
+ size 17082660
e5-multilingual/tokenizer_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "mask_token": {
+     "__type": "AddedToken",
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "XLMRobertaTokenizer",
+   "unk_token": "<unk>"
+ }