Revankumar committed
Commit c836ee6
1 Parent(s): fa3cef4

Upload 10 files
README.md CHANGED
@@ -1,27 +1,22 @@
 ---
-license: mit
----
-
----
+library_name: sentence-transformers
+pipeline_tag: sentence-similarity
 tags:
 - sentence-transformers
 - feature-extraction
----
-# Name of Model
+- sentence-similarity
+
+---
 
-<!--- Describe your model here -->
-
-## Model Description
-The model consists of the following layers:
+# {MODEL_NAME}
 
-(0) Base Transformer Type: BAAI/bge-small-en-v1.5
+This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
 
-(1) mean Pooling
-
+<!--- Describe your model here -->
 
 ## Usage (Sentence-Transformers)
 
-Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
+Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
 
 ```
 pip install -U sentence-transformers
@@ -31,51 +26,67 @@ Then you can use the model like this:
 
 ```python
 from sentence_transformers import SentenceTransformer
-sentences = ["This is an example sentence"]
-model = SentenceTransformer('model_name')
+sentences = ["This is an example sentence", "Each sentence is converted"]
+
+model = SentenceTransformer('{MODEL_NAME}')
 embeddings = model.encode(sentences)
 print(embeddings)
 ```
 
 
-## Usage (HuggingFace Transformers)
 
-```python
-from transformers import AutoTokenizer, AutoModel
-import torch
-#Mean Pooling - Take attention mask into account for correct averaging
-def mean_pooling(model_output, attention_mask):
-    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
-    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
-    sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
-    sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-    return sum_embeddings / sum_mask
-# Sentences we want sentence embeddings for
-sentences = ['This is an example sentence']
-# Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('model_name')
-model = AutoModel.from_pretrained('model_name')
-# Tokenize sentences
-encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
-# Compute token embeddings
-with torch.no_grad():
-    model_output = model(**encoded_input)
-# Perform pooling. In this case, max pooling.
-sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
-print("Sentence embeddings:")
-print(sentence_embeddings)
-```
 
-## Training Procedure
 
-<!--- Describe how your model was trained -->
-
-## Evaluation Results
+## Evaluation Results
 
-<!--- Describe how your model was evaluated -->
+<!--- Describe how your model was evaluated -->
+
+For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
+
+## Training
+The model was trained with the parameters:
+
+**DataLoader**:
+
+`torch.utils.data.dataloader.DataLoader` of length 109 with parameters:
+```
+{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
+```
+
+**Loss**:
+
+`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
+```
+{'scale': 20.0, 'similarity_fct': 'cos_sim'}
+```
+
+Parameters of the fit()-Method:
+```
+{
+    "epochs": 2,
+    "evaluation_steps": 50,
+    "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator",
+    "max_grad_norm": 1,
+    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
+    "optimizer_params": {
+        "lr": 2e-05
+    },
+    "scheduler": "WarmupLinear",
+    "steps_per_epoch": null,
+    "warmup_steps": 21,
+    "weight_decay": 0.01
+}
+```
+
+
+## Full Model Architecture
+```
+SentenceTransformer(
+  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
+  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
+  (2): Normalize()
+)
+```
 
 ## Citing & Authors
 
-<!--- Describe where people can find more information -->
+<!--- Describe where people can find more information -->
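The `{MODEL_NAME}` placeholder above comes from the sentence-transformers card template and was never filled in for this repo. As a minimal sketch of the documented usage, with the base checkpoint `BAAI/bge-small-en-v1.5` standing in for the repo id, the encoded vectors can also be compared directly, which is what the new `sentence-similarity` pipeline tag advertises:

```python
# Sketch only: 'BAAI/bge-small-en-v1.5' is a stand-in for the unfilled {MODEL_NAME}.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
sentences = ["This is an example sentence", "Each sentence is converted"]

embeddings = model.encode(sentences)               # one 384-dim vector per sentence
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the pair
```

Likewise, the training parameters listed in the new card can be read back into a `fit()` call. The following is a hedged reconstruction, not the author's actual training script: `train_examples` is a hypothetical placeholder (a DataLoader of length 109 at batch size 10 implies roughly 1,090 query-passage pairs), and the card's `InformationRetrievalEvaluator` is omitted because its corpus is unknown.

```python
# Hedged reconstruction of the fit() call from the parameters in the card.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("BAAI/bge-small-en-v1.5")  # stand-in for the base model
train_examples = [InputExample(texts=["a query", "a relevant passage"])]  # placeholder data
train_dataloader = DataLoader(train_examples, batch_size=10)

# scale=20.0 and the default cos_sim match the loss parameters in the card
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    scheduler="WarmupLinear",
    warmup_steps=21,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```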
config.json ADDED
@@ -0,0 +1,31 @@
+{
+  "_name_or_path": "BAAI/bge-small-en-v1.5",
+  "architectures": [
+    "BertModel"
+  ],
+  "attention_probs_dropout_prob": 0.1,
+  "classifier_dropout": null,
+  "hidden_act": "gelu",
+  "hidden_dropout_prob": 0.1,
+  "hidden_size": 384,
+  "id2label": {
+    "0": "LABEL_0"
+  },
+  "initializer_range": 0.02,
+  "intermediate_size": 1536,
+  "label2id": {
+    "LABEL_0": 0
+  },
+  "layer_norm_eps": 1e-12,
+  "max_position_embeddings": 512,
+  "model_type": "bert",
+  "num_attention_heads": 12,
+  "num_hidden_layers": 12,
+  "pad_token_id": 0,
+  "position_embedding_type": "absolute",
+  "torch_dtype": "float32",
+  "transformers_version": "4.35.2",
+  "type_vocab_size": 2,
+  "use_cache": true,
+  "vocab_size": 30522
+}
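This config is the unmodified `BAAI/bge-small-en-v1.5` geometry, as the `_name_or_path` field indicates. A small sketch, not part of the commit, that checks the fields downstream users rely on; the repo id is a stand-in assumption:

```python
# Sketch: confirm the declared BERT geometry matches the card's claims.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("BAAI/bge-small-en-v1.5")  # stand-in repo id
assert config.model_type == "bert"
assert config.hidden_size == 384              # the 384-dim embeddings in the README
assert config.num_hidden_layers == 12
assert config.max_position_embeddings == 512  # matches max_seq_length in sentence_bert_config.json
```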
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
+{
+  "__version__": {
+    "sentence_transformers": "2.2.2",
+    "transformers": "4.28.1",
+    "pytorch": "1.13.0+cu117"
+  }
+}
model.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:915f3d8c284d86d0900367a9db2f3c12fe44c4fa5d0f4e51a46dc400dce0dd0e
+size 133462128
modules.json ADDED
@@ -0,0 +1,20 @@
+[
+  {
+    "idx": 0,
+    "name": "0",
+    "path": "",
+    "type": "sentence_transformers.models.Transformer"
+  },
+  {
+    "idx": 1,
+    "name": "1",
+    "path": "1_Pooling",
+    "type": "sentence_transformers.models.Pooling"
+  },
+  {
+    "idx": 2,
+    "name": "2",
+    "path": "2_Normalize",
+    "type": "sentence_transformers.models.Normalize"
+  }
+]
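These three modules are the pipeline printed in the README's "Full Model Architecture": a BERT encoder, a Pooling layer configured with `pooling_mode_cls_token: True`, and a Normalize layer. Note that this contradicts the removed card text, which claimed mean pooling. A minimal sketch of the same pipeline in plain transformers, assuming the base checkpoint as a stand-in repo id:

```python
# Sketch of what modules 0-2 do: encode, take the [CLS] token, L2-normalize.
import torch
from transformers import AutoTokenizer, AutoModel

repo = "BAAI/bge-small-en-v1.5"  # stand-in for this repo's id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

encoded = tokenizer(["This is an example sentence"], padding=True,
                    truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)                           # module 0: Transformer

cls = output.last_hidden_state[:, 0]                    # module 1: CLS pooling
embedding = torch.nn.functional.normalize(cls, dim=-1)  # module 2: Normalize
print(embedding.shape)  # torch.Size([1, 384])
```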
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+{
+  "max_seq_length": 512,
+  "do_lower_case": true
+}
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+{
+  "cls_token": "[CLS]",
+  "mask_token": "[MASK]",
+  "pad_token": "[PAD]",
+  "sep_token": "[SEP]",
+  "unk_token": "[UNK]"
+}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+{
+  "added_tokens_decoder": {
+    "0": {
+      "content": "[PAD]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "100": {
+      "content": "[UNK]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "101": {
+      "content": "[CLS]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "102": {
+      "content": "[SEP]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "103": {
+      "content": "[MASK]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    }
+  },
+  "clean_up_tokenization_spaces": true,
+  "cls_token": "[CLS]",
+  "do_basic_tokenize": true,
+  "do_lower_case": true,
+  "mask_token": "[MASK]",
+  "model_max_length": 512,
+  "never_split": null,
+  "pad_token": "[PAD]",
+  "sep_token": "[SEP]",
+  "strip_accents": null,
+  "tokenize_chinese_chars": true,
+  "tokenizer_class": "BertTokenizer",
+  "unk_token": "[UNK]"
+}
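A short sketch, again assuming the base checkpoint as a stand-in repo id, showing the behaviour this config declares: an uncased BERT WordPiece tokenizer capped at 512 tokens:

```python
# Sketch: the tokenizer this config describes lower-cases input and
# exposes the five special tokens mapped above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-small-en-v1.5")  # stand-in repo id
print(tokenizer.model_max_length)         # 512
print(tokenizer.tokenize("Hello World"))  # ['hello', 'world']
print(tokenizer.cls_token, tokenizer.sep_token, tokenizer.pad_token)
```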
vocab.txt ADDED
The diff for this file is too large to render. See raw diff