Ananthu357 committed on
Commit 5f2a079
1 Parent(s): 79ffec4

Add new SentenceTransformer model.
1_Pooling/config.json ADDED

```json
{
  "word_embedding_dimension": 384,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
```
README.md ADDED

---
base_model: sentence-transformers/all-MiniLM-L6-v2
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:458
- loss:CosineSimilarityLoss
widget:
- source_sentence: What does the document say about GST ?
  sentences:
  - If any ambiguity arises as to the meaning and intent of any portion of the Specifications and Drawings or as to execution or quality of any work or material, or as to the measurements of the works the decision of the Engineer thereon shall be final subject to the appeal
  - For tenders costing more than Rs 20 crore wherein eligibility criteria includes bid capacity also, the tenderer will be qualified only if its available bid capacity is equal to or more than the total bid value of the present tender. The available bid capacity shall be calculated.
  - Tenderers will examine the various provisions of The Central Goods and Services Tax Act, 2017(CGST)/ Integrated Goods and Services Tax Act, 2017(IGST)/ Union Territory Goods and Services Tax Act, 2017(UTGST)/
- source_sentence: What is the deadline to submit the proposed project schedule?
  sentences:
  - The Contractor who has been awarded the work shall as soon as possible but not later than 30 days after the date of receipt of the acceptance letter
  - Special Conditions can modify the Standard General Conditions.
  - Limited Tenders shall mean tenders invited from all or some contractors on the approved or select list of contractors with the Railway
- source_sentence: These Regulations for Tenders and Contracts shall be read in conjunction with the Standard General Conditions of Contract which are referred to herein and shall be subject to modifications additions or suppression by Special Conditions of Contract and/or Special Specifications, if any, annexed to the Tender Forms.
  sentences:
  - unless the Contractor has made a claim in writing in respect thereof before the issue of the Maintenance Certificate under this clause.
  - There shall be no modification expected.
  - Indemnification clause
- source_sentence: No claim certificate
  sentences:
  - Subcontracting will in no way relieve the Contractor to execute the work as per terms of the contract.
  - Final Supplementary Agreement
  - Client can transfer the liability to the contractor
- source_sentence: What is the deadline to submit the proposed project schedule?
  sentences:
  - The Contractor shall at his own expense provide with sheds, storehouses and yards in such situations and in such numbers
  - This clause defines the Contractor's responsibility for subcontractor performance.
  - Any item of work carried out by the Contractor on the instructions of the Engineer which is not included in the accepted Schedules of Rates shall be executed at the rates set forth in the Schedule of Rates of Railway.
---

# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision 8b3219a92973c328a8e22fadcfa821b5dc75636a -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
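
The pipeline above runs token embeddings through masked mean pooling, then L2 normalization. As an illustration only (toy 2-dimensional vectors standing in for the model's 384-dimensional token embeddings), the two pooling stages behave roughly like this plain-Python sketch:

```python
import math

def mean_pool(token_embeddings, attention_mask):
    """Average token vectors where the mask is 1, as in module (1) above."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, m in zip(token_embeddings, attention_mask):
        if m:
            count += 1
            for i, v in enumerate(vec):
                sums[i] += v
    return [s / count for s in sums]

def normalize(vec):
    """Scale to unit L2 norm, as in module (2) above."""
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec]

# Toy 3-token sequence; the last token is padding and is ignored.
tokens = [[1.0, 0.0], [0.0, 1.0], [9.0, 9.0]]
mask = [1, 1, 0]
pooled = mean_pool(tokens, mask)   # [0.5, 0.5]
embedding = normalize(pooled)      # unit-length sentence embedding
```

Because the final module normalizes every embedding to unit length, cosine similarity between two sentence embeddings reduces to a plain dot product.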

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Ananthu357/Ananthus-Transformers-for-contracts")
# Run inference
sentences = [
    'What is the deadline to submit the proposed project schedule?',
    'Any item of work carried out by the Contractor on the instructions of the Engineer which is not included in the accepted Schedules of Rates shall be executed at the rates set forth in the Schedule of Rates of Railway.',
    '\xa0 \xa0 \xa0 \xa0 The Contractor shall at his own expense provide with sheds, storehouses and yards in such situations and in such numbers',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
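
The `[3, 3]` matrix returned by `model.similarity` holds the pairwise cosine similarities of the three embeddings. A minimal stand-in for that computation, using toy 2-dimensional vectors instead of real `(3, 384)` embeddings, looks like this:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings" standing in for the (3, 384) matrix above.
embs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
sims = [[cosine(a, b) for b in embs] for a in embs]
# sims is 3x3 and symmetric, with 1.0 on the diagonal.
```

Higher values mean the sentences are closer in the embedding space; for a retrieval setup you would rank candidate sentences by their similarity to a query embedding.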

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 25
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates

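For reference, the non-default hyperparameters above map onto a `SentenceTransformerTrainingArguments` configuration roughly as follows. This is a hedged sketch assuming the sentence-transformers 3.x trainer API; `output_dir` is a placeholder, not a value taken from this repository, and the actual training script is not published here.

```python
from sentence_transformers import SentenceTransformerTrainingArguments

# Sketch of the configuration implied by the listed hyperparameters.
args = SentenceTransformerTrainingArguments(
    output_dir="output",                 # placeholder, not from this repo
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=25,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler="no_duplicates",       # avoids duplicate sentences per batch
)
```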
#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 25
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch   | Step | Training Loss | loss   |
|:-------:|:----:|:-------------:|:------:|
| 3.3448  | 100  | 0.1154        | 0.0756 |
| 6.6897  | 200  | 0.0204        | 0.0675 |
| 10.0345 | 300  | 0.0123        | 0.0767 |
| 13.3448 | 400  | 0.0048        | 0.0650 |
| 16.6897 | 500  | 0.0031        | 0.0633 |
| 20.0345 | 600  | 0.0026        | 0.0647 |
| 23.3448 | 700  | 0.0025        | 0.0649 |


### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED

```json
{
  "_name_or_path": "sentence-transformers/all-MiniLM-L6-v2",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 6,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.42.4",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
```
config_sentence_transformers.json ADDED

```json
{
  "__version__": {
    "sentence_transformers": "3.0.1",
    "transformers": "4.42.4",
    "pytorch": "2.3.1+cu121"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": null
}
```
model.safetensors ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:70bc962e5865243aa272d98f1e07d347784408a74ddd803cb2f098851b091dac
size 90864192
```
modules.json ADDED

```json
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
```
sentence_bert_config.json ADDED

```json
{
  "max_seq_length": 256,
  "do_lower_case": false
}
```
special_tokens_map.json ADDED

```json
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
```
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED

```json
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "max_length": 128,
  "model_max_length": 256,
  "never_split": null,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
```
vocab.txt ADDED
The diff for this file is too large to render. See raw diff