shuttie committed
Commit
082bf7c
0 Parent(s):

initial commit

.gitattributes ADDED
@@ -0,0 +1,4 @@
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ vocab.txt filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "word_embedding_dimension": 384,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false
+ }
README.md ADDED
@@ -0,0 +1,100 @@
+ ---
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+
+ ---
+
+ # nixie-suggest-small-v1
+
+ This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
+
+ This model is based on the E5-small-v2 model, fine-tuned for typical suggester-like workloads:
+ * for a partial and noisy query input, it tries to minimize the cosine distance to the correct full query: 'mil' should be close to 'milk'
+ * the model is also robust to typical typos like letter drops/swaps/duplications, so 'mikl' is still close to 'milk'
+ * the model is asymmetric (like the original E5), so prepend partial inputs with 'query: ' and full queries with 'passage: '
+
+ ## Usage (Sentence-Transformers)
+
+ Using this model is easy when you have [sentence-transformers](https://www.SBERT.net) installed:
+
+ ```
+ pip install -U sentence-transformers
+ ```
+
+ Then you can use the model like this:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ sentences = ["query: mil", "passage: milk"]
+
+ model = SentenceTransformer('nixiesearch/nixie-suggest-small-v1')
+ embeddings = model.encode(sentences)
+ print(embeddings)
+ ```
+
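+ Since the model keeps noisy prefixes close to their full queries in embedding space, you can rank candidate suggestions by cosine similarity. Here is a minimal sketch using the `util` helpers from sentence-transformers; the candidate strings are just illustrative:
+
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+
+ model = SentenceTransformer('nixiesearch/nixie-suggest-small-v1')
+
+ # partial/noisy inputs get the 'query: ' prefix, full queries the 'passage: ' prefix
+ query = model.encode("query: mikl", convert_to_tensor=True)
+ candidates = model.encode(
+     ["passage: milk", "passage: mild", "passage: mile"], convert_to_tensor=True
+ )
+
+ # cosine similarity between the noisy prefix and each candidate suggestion
+ print(util.cos_sim(query, candidates))
+ ```
+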
+ ## Training dataset
+
+ The training dataset was synthetically generated from the following corpora:
+ * top-100k most frequent English words, from the Google N-Gram project: [https://github.com/hackerb9/gwordlist](https://github.com/hackerb9/gwordlist)
+ * top-1M 2-grams and 3-grams from [MultiLex](https://analytics.huma-num.fr/popr-ngram/Multi-LEX/index.html#en-section)
+
+ We applied the following permutations to the original 1/2/3-grams (sketched in code below):
+ * letter swaps: milk-mikl
+ * letter drops: milk-ilk
+ * qwerty-aware replacements: milk-nilk
+ * duplications: milk-miilk
+
+ The original generation code is available on GitHub: https://github.com/nixiesearch/autocomplete-playground
+
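+ As a rough illustration (the function names and the tiny qwerty map below are made up; the linked generator is the authoritative implementation), the permutations boil down to something like this:
+
+ ```python
+ import random
+
+ # a tiny slice of a qwerty-neighbor map, purely for illustration
+ QWERTY_NEIGHBORS = {"m": "nj", "i": "uo", "l": "ko", "k": "jl"}
+
+ def swap(word: str) -> str:
+     # swap two adjacent letters: milk -> mikl
+     i = random.randrange(len(word) - 1)
+     return word[:i] + word[i + 1] + word[i] + word[i + 2:]
+
+ def drop(word: str) -> str:
+     # drop one letter: milk -> ilk
+     i = random.randrange(len(word))
+     return word[:i] + word[i + 1:]
+
+ def qwerty_replace(word: str) -> str:
+     # replace one letter with a keyboard neighbor: milk -> nilk
+     i = random.randrange(len(word))
+     return word[:i] + random.choice(QWERTY_NEIGHBORS.get(word[i], word[i])) + word[i + 1:]
+
+ def duplicate(word: str) -> str:
+     # duplicate one letter: milk -> miilk
+     i = random.randrange(len(word))
+     return word[:i] + word[i] + word[i:]
+
+ print([f("milk") for f in (swap, drop, qwerty_replace, duplicate)])
+ ```
+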
+ ## Training
+ The model was trained with the following parameters:
+
+ **DataLoader**:
+
+ `torch.utils.data.dataloader.DataLoader` of length 220359 with parameters:
+ ```
+ {'batch_size': 2048, 'sampler': 'torch.utils.data.dataloader._InfiniteConstantSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
+ ```
+
+ **Loss**:
+
+ `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
+ ```
+ {'scale': 20.0, 'similarity_fct': 'cos_sim'}
+ ```
+
+ Parameters of the fit() method:
+ ```
+ {
+     "epochs": 1,
+     "evaluation_steps": 3000,
+     "evaluator": "sentence_transformers.evaluation.RerankingEvaluator.RerankingEvaluator",
+     "max_grad_norm": 1,
+     "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
+     "optimizer_params": {
+         "lr": 2e-05
+     },
+     "scheduler": "WarmupLinear",
+     "steps_per_epoch": 220358,
+     "warmup_steps": 1000,
+     "weight_decay": 0.01
+ }
+ ```
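+
+ For reference, here is a condensed, hypothetical sketch of how these settings map onto a sentence-transformers training run. The toy training pairs and the data pipeline are placeholders (the actual dataset construction lives in the autocomplete-playground repository), and the evaluator is omitted:
+
+ ```python
+ from torch.utils.data import DataLoader
+ from sentence_transformers import SentenceTransformer, InputExample, losses, util
+
+ # start from the E5-small-v2 base model
+ model = SentenceTransformer("intfloat/e5-small-v2")
+
+ # toy pairs; the real dataset pairs noisy prefixes with their full queries
+ train_examples = [
+     InputExample(texts=["query: mikl", "passage: milk"]),
+     InputExample(texts=["query: chese", "passage: cheese"]),
+ ]
+ train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2048)
+
+ # in-batch negatives with cosine similarity and scale 20.0, as listed above
+ train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
+
+ model.fit(
+     train_objectives=[(train_dataloader, train_loss)],
+     epochs=1,
+     scheduler="WarmupLinear",
+     warmup_steps=1000,
+     optimizer_params={"lr": 2e-05},
+     weight_decay=0.01,
+     max_grad_norm=1,
+ )
+ ```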
+
+ ## Full Model Architecture
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
+   (2): Normalize()
+ )
+ ```
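+
+ If you prefer plain transformers, the three modules above roughly correspond to the following sketch (mean pooling over the attention mask followed by L2 normalization); this is an illustration, not the packaged inference code:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+ from transformers import AutoTokenizer, AutoModel
+
+ tokenizer = AutoTokenizer.from_pretrained("nixiesearch/nixie-suggest-small-v1")
+ model = AutoModel.from_pretrained("nixiesearch/nixie-suggest-small-v1")
+
+ batch = tokenizer(["query: mil", "passage: milk"], padding=True, truncation=True,
+                   max_length=512, return_tensors="pt")
+
+ with torch.no_grad():
+     last_hidden = model(**batch).last_hidden_state          # (0): Transformer
+
+ mask = batch["attention_mask"].unsqueeze(-1).float()
+ embeddings = (last_hidden * mask).sum(1) / mask.sum(1)      # (1): mean pooling
+ embeddings = F.normalize(embeddings, p=2, dim=1)            # (2): Normalize
+ print(embeddings.shape)  # torch.Size([2, 384])
+ ```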
+
+ ## Citing & Authors
+
+ <!--- Describe where people can find more information -->
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_name_or_path": "/home/shutty/.cache/torch/sentence_transformers/intfloat_e5-small-v2/",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.31.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "__version__": {
+     "sentence_transformers": "2.2.2",
+     "transformers": "4.31.0",
+     "pytorch": "2.0.1+cu117"
+   }
+ }
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
onnx_convert.py ADDED
@@ -0,0 +1,18 @@
+ from transformers import AutoModel
+ import torch
+
+ max_seq_length = 128
+
+ # load the fine-tuned BERT encoder from the current directory and switch to eval mode
+ model = AutoModel.from_pretrained(".")
+ model.eval()
+
+ # dummy inputs used only for tracing; the values are irrelevant, the shapes define the graph
+ inputs = {"input_ids": torch.ones(1, max_seq_length, dtype=torch.int64),
+           "attention_mask": torch.ones(1, max_seq_length, dtype=torch.int64),
+           "token_type_ids": torch.ones(1, max_seq_length, dtype=torch.int64)}
+
+ # mark batch and sequence dimensions as dynamic in the exported graph
+ symbolic_names = {0: 'batch_size', 1: 'max_seq_len'}
+
+ torch.onnx.export(model, args=tuple(inputs.values()), f='pytorch_model.onnx', export_params=True,
+                   input_names=['input_ids', 'attention_mask', 'token_type_ids'], output_names=['last_hidden_state'],
+                   dynamic_axes={'input_ids': symbolic_names, 'attention_mask': symbolic_names, 'token_type_ids': symbolic_names})
+
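+ # Optional sanity check for the exported graph. This block is an illustrative addition,
+ # not part of the original export script, and assumes the onnxruntime package is installed.
+ import onnxruntime as ort
+
+ session = ort.InferenceSession('pytorch_model.onnx')
+ ort_inputs = {name: tensor.numpy() for name, tensor in inputs.items()}
+ (last_hidden_state,) = session.run(['last_hidden_state'], ort_inputs)
+ print(last_hidden_state.shape)  # expected: (1, max_seq_length, 384)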
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da1041d07af472698bb9f233ab3ec54913b73a53ed6eb5f1f287256c5784d6d7
+ size 133506729
pytorch_model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:315cc60a85802f9a2eb0e2b9d51ff7a971ed4a2f98d9ec9ac6436b9ea9530207
+ size 133694736
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91f1def9b9391fdabe028cd3f3fcc4efd34e5d1f08c3bf2de513ebb5911a1854
+ size 711649
tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 1000000000000000019884624838656,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:07eced375cec144d27c900241f3e339478dec958f92fddbc551f295c992038a3
+ size 231508