MugheesAwan11 committed on
Commit
89a30b8
1 Parent(s): 795a1b7

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "word_embedding_dimension": 768,
+ "pooling_mode_cls_token": true,
+ "pooling_mode_mean_tokens": false,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false,
+ "pooling_mode_weightedmean_tokens": false,
+ "pooling_mode_lasttoken": false,
+ "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,724 @@
+ ---
+ base_model: BAAI/bge-base-en-v1.5
+ datasets: []
+ language:
+ - en
+ library_name: sentence-transformers
+ license: apache-2.0
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:494
+ - loss:MatryoshkaLoss
+ - loss:MultipleNegativesRankingLoss
+ widget:
+ - source_sentence: 'Program Join our Partner Program Contact Us Contact us to learn
+ more or schedule a demo News Coverage Read about Securiti in the news Press Releases
+ Find our latest press releases Careers Join the talented Securiti team Knowledge
+ Center » Data Privacy Automation # New Zealand''s Privacy Act of 2020 By Securiti
+ Research Team Published March 7, 2022 / Updated August 11, 2023 New Zealand was
+ one of the first countries that enacted a law specifically dedicated to its residents''
+ right to privacy with its Privacy Act of 1993. Whilst the entire definition of
+ what "privacy" means has undergone a radical shift since then New Zealand’s principles
+ based legislation has remained relatively fit for purpose. Even with the advent
+ of social media and the internet adding an entirely new paradigm to that topic.
+ In recognition of the evolution of privacy, New Zealand updated its'
+ sentences:
+ - Where can I find Securiti's latest press releases?
+ - What are the requirements for data transfer under Spain's data protection law,
+ including certifications and information for data subjects?
+ - What is the term for the right to delete personal data upon request, also known
+ as 'the right to be forgotten', and what are the other data protection rights
+ under GDPR?
+ - source_sentence: 'that the third party: has appropriate policies and processes in
+ place; has trained its staff to ensure information is appropriately safeguarded
+ at all times; has adequate security measures in place. Simultaneously, the Cross-border
+ Guidelines also specify that organizations must provide notice to customers that:
+ their personal information may be sent to another jurisdiction for processing;
+ while the information is in the other jurisdiction, it may be accessed by the
+ courts, law enforcement, and national security authorities. ## 10\. Data Subject
+ Rights PIPEDA bestows the following rights to data subjects: Right to access Right
+ to accuracy and completeness Right to withdraw consent and submit complaints ##
+ 11\. Penalties for PIPEDA Non-Compliance PIPEDA imposes administrative penalties
+ for non-compliance, where the amount may vary depending upon the severity and
+ the kind of violation. According to PIPEDA, : organizations must keep personal
+ information accurate. 7. **Safeguards** : organizations must protect personal
+ information against loss or theft. 8. **Openness** : privacy policy and practices
+ must be understandable and easily available. 9. **Individual access** : data subjects
+ have a right to access the personal information an organization holds about them.
+ 10. **Resource** : organizations must develop accessible complaint procedures.
+ ## 3\. Obligations for the Data Controller and Data Processor PIPEDA does not
+ differentiate between data controllers and data processors and provides a similar
+ set of responsibilities for both controllers and processors. PIPEDA demands all
+ organizations appoint individuals who will be accountable for ensuring streamlined
+ compliance of an organization’s data activities in accordance with the provisions
+ of PIPEDA. ## 4\. Consent Requirements In many circumstances, PIPEDA requires
+ organizations to obtain the data subject’s consent to use, disclose, and retain
+ any personal information.'
+ sentences:
+ - What are the key provisions of South Korea's data privacy law?
+ - What are the circumstances in which the data subject must be notified about the
+ collection of personal data?
+ - How does PIPEDA ensure staff's compliance with guidelines and obligations regarding
+ information protection?
+ - source_sentence: 'The criteria used The purpose of processing This information must
+ be provided within 15 days from the date of the data subject’s request. vs GDPR
+ states that, when responding to an access request, a data controller must indicate
+ the following: The categories of personal data concerned The recipients or categories
+ of recipients to whom personal data have been disclosed to The retention period
+ The right to lodge a complaint with the supervisory authority The existence of
+ data transfers The existence of automated decision making The information must
+ be provided without undue delay and in any event within one month of the receipt
+ of the request. LGPD grants the right to data portability through an express request
+ and subject to commercial and industrial secrecy, pursuant to the regulation of
+ the controlling agency. This right, however, does not include data that has already
+ been anonymised by the controller. vs GDPR defines the right to'
+ sentences:
+ - What is considered an offense related to obstructing the OPC in an investigation?
+ - What does LGPD grant the right to in terms of data portability?
+ - How does automation aid in complying with data privacy regulations like the PDPO?
+ - source_sentence: 'uriti Research Team Published December 3, 2020 / Updated October
+ 3, 2023 On 1 December 2020, New Zealand’s new Privacy Act 2020 came into effect.
+ Our experts at Securiti have compiled the following list of compliance actions
+ to remind organizations of their obligations under New Zealand’s new Privacy Act.
+ ## 1\. Notify privacy breaches within 72 hours Organizations must notify privacy
+ breach that has caused serious harm to the affected individual or is likely to
+ do so, to the Privacy Commissioner and the affected individuals as soon as practicable
+ or within 72 hours after becoming aware of the breach. Where it is not reasonably
+ practicable to notify the affected individual or each member of a group of affected
+ individuals, organizations must notify the public in a manner that no individual
+ is identified. Companies that fail to notify privacy breaches without any reasonable
+ excuse would be liable on conviction to a fine not exceeding $10,000. ## 2\. Notify
+ privacy breaches caused by any'
+ sentences:
+ - When are controllers and data processors required to appoint a DPO according to
+ the PDP Law and state regulations in Indonesia?
+ - What is the time frame for notifying privacy breaches under New Zealand's new
+ Privacy Act?
+ - What rights do Colorado residents have over their personal data under the Colorado
+ Privacy Act?
+ - source_sentence: Careers View Events Spotlight Talks IDC Names Securiti a Worldwide
+ Leader in Data Privacy View Events Spotlight Talks Education Contact Us Schedule
+ a Demo Products By Use Cases By Roles Data Command Center View Learn more Asset
+ and Data Discovery Discover dark and native data assets Learn more Data Access
+ Intelligence & Governance Identify which users have access to sensitive data and
+ prevent unauthorized access Learn more Data Privacy Automation PrivacyCenter.Cloud
+ | Data Mapping | DSR Automation | Assessment Automation | Vendor Assessment |
+ Breach Management | Privacy Notice Learn more Sensitive Data Intelligence Discover
+ & Classify Structured and Unstructured Data | People Data Graph Learn more Data
+ Flow Intelligence & Governance Prevent sensitive data sprawl through real-, Press
+ Releases View Careers View Events Spotlight Talks IDC Names Securiti a Worldwide
+ Leader in Data Privacy View Events Spotlight Talks Education Contact Us Schedule
+ a Demo Products By Use Cases By Roles Data Command Center View Learn more Asset
+ and Data Discovery Discover dark and native data assets Learn more Data Access
+ Intelligence & Governance Identify which users have access to sensitive data and
+ prevent unauthorized access Learn more Data Privacy Automation PrivacyCenter.Cloud
+ | Data Mapping | DSR Automation | Assessment Automation | Vendor Assessment |
+ Breach Management | Privacy Notice Learn more Sensitive Data Intelligence Discover
+ & Classify Structured and Unstructured Data | People Data Graph Learn more Data
+ Flow Intelligence & Governance Prevent
+ sentences:
+ - What is the purpose of the Data Command Center?
+ - What are IBM's future prospects and preparedness for new business opportunities?
+ - What is the US California CCPA?
+ model-index:
+ - name: SentenceTransformer based on BAAI/bge-base-en-v1.5
+ results:
+ - task:
+ type: information-retrieval
+ name: Information Retrieval
+ dataset:
+ name: dim 768
+ type: dim_768
+ metrics:
+ - type: cosine_accuracy@1
+ value: 0.34845360824742266
+ name: Cosine Accuracy@1
+ - type: cosine_accuracy@3
+ value: 0.5855670103092784
+ name: Cosine Accuracy@3
+ - type: cosine_accuracy@5
+ value: 0.6701030927835051
+ name: Cosine Accuracy@5
+ - type: cosine_accuracy@10
+ value: 0.756701030927835
+ name: Cosine Accuracy@10
+ - type: cosine_precision@1
+ value: 0.34845360824742266
+ name: Cosine Precision@1
+ - type: cosine_precision@3
+ value: 0.1951890034364261
+ name: Cosine Precision@3
+ - type: cosine_precision@5
+ value: 0.13402061855670103
+ name: Cosine Precision@5
+ - type: cosine_precision@10
+ value: 0.0756701030927835
+ name: Cosine Precision@10
+ - type: cosine_recall@1
+ value: 0.34845360824742266
+ name: Cosine Recall@1
+ - type: cosine_recall@3
+ value: 0.5855670103092784
+ name: Cosine Recall@3
+ - type: cosine_recall@5
+ value: 0.6701030927835051
+ name: Cosine Recall@5
+ - type: cosine_recall@10
+ value: 0.756701030927835
+ name: Cosine Recall@10
+ - type: cosine_ndcg@10
+ value: 0.5507373799577976
+ name: Cosine Ndcg@10
+ - type: cosine_mrr@10
+ value: 0.4849337260677468
+ name: Cosine Mrr@10
+ - type: cosine_map@100
+ value: 0.4942402452655515
+ name: Cosine Map@100
+ - task:
+ type: information-retrieval
+ name: Information Retrieval
+ dataset:
+ name: dim 512
+ type: dim_512
+ metrics:
+ - type: cosine_accuracy@1
+ value: 0.3463917525773196
+ name: Cosine Accuracy@1
+ - type: cosine_accuracy@3
+ value: 0.5938144329896907
+ name: Cosine Accuracy@3
+ - type: cosine_accuracy@5
+ value: 0.668041237113402
+ name: Cosine Accuracy@5
+ - type: cosine_accuracy@10
+ value: 0.756701030927835
+ name: Cosine Accuracy@10
+ - type: cosine_precision@1
+ value: 0.3463917525773196
+ name: Cosine Precision@1
+ - type: cosine_precision@3
+ value: 0.1979381443298969
+ name: Cosine Precision@3
+ - type: cosine_precision@5
+ value: 0.13360824742268038
+ name: Cosine Precision@5
+ - type: cosine_precision@10
+ value: 0.07567010309278348
+ name: Cosine Precision@10
+ - type: cosine_recall@1
+ value: 0.3463917525773196
+ name: Cosine Recall@1
+ - type: cosine_recall@3
+ value: 0.5938144329896907
+ name: Cosine Recall@3
+ - type: cosine_recall@5
+ value: 0.668041237113402
+ name: Cosine Recall@5
+ - type: cosine_recall@10
+ value: 0.756701030927835
+ name: Cosine Recall@10
+ - type: cosine_ndcg@10
+ value: 0.5517739147624575
+ name: Cosine Ndcg@10
+ - type: cosine_mrr@10
+ value: 0.48604565537555244
+ name: Cosine Mrr@10
+ - type: cosine_map@100
+ value: 0.4956303541940711
+ name: Cosine Map@100
+ - task:
+ type: information-retrieval
+ name: Information Retrieval
+ dataset:
+ name: dim 256
+ type: dim_256
+ metrics:
+ - type: cosine_accuracy@1
+ value: 0.3422680412371134
+ name: Cosine Accuracy@1
+ - type: cosine_accuracy@3
+ value: 0.5670103092783505
+ name: Cosine Accuracy@3
+ - type: cosine_accuracy@5
+ value: 0.6618556701030928
+ name: Cosine Accuracy@5
+ - type: cosine_accuracy@10
+ value: 0.7484536082474227
+ name: Cosine Accuracy@10
+ - type: cosine_precision@1
+ value: 0.3422680412371134
+ name: Cosine Precision@1
+ - type: cosine_precision@3
+ value: 0.1890034364261168
+ name: Cosine Precision@3
+ - type: cosine_precision@5
+ value: 0.13237113402061854
+ name: Cosine Precision@5
+ - type: cosine_precision@10
+ value: 0.07484536082474226
+ name: Cosine Precision@10
+ - type: cosine_recall@1
+ value: 0.3422680412371134
+ name: Cosine Recall@1
+ - type: cosine_recall@3
+ value: 0.5670103092783505
+ name: Cosine Recall@3
+ - type: cosine_recall@5
+ value: 0.6618556701030928
+ name: Cosine Recall@5
+ - type: cosine_recall@10
+ value: 0.7484536082474227
+ name: Cosine Recall@10
+ - type: cosine_ndcg@10
+ value: 0.5412682955861301
+ name: Cosine Ndcg@10
+ - type: cosine_mrr@10
+ value: 0.475321551300933
+ name: Cosine Mrr@10
+ - type: cosine_map@100
+ value: 0.48455040697749474
+ name: Cosine Map@100
+ ---
+
+ # SentenceTransformer based on BAAI/bge-base-en-v1.5
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ - **Language:** en
+ - **License:** apache-2.0
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+ (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
+ (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ (2): Normalize()
+ )
+ ```
+
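+ The same pipeline (transformer encoder, CLS-token pooling, then L2 normalization) can be assembled by hand from `sentence_transformers.models`. The snippet below is an illustrative sketch rebuilt from the configuration files in this repository, not code shipped with it:
+
+ ```python
+ from sentence_transformers import SentenceTransformer, models
+
+ # Token-level encoder (BertModel) with a 512-token limit, per sentence_bert_config.json
+ transformer = models.Transformer("BAAI/bge-base-en-v1.5", max_seq_length=512)
+ # CLS-token pooling over the 768-dimensional word embeddings, per 1_Pooling/config.json
+ pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="cls")
+ # L2-normalize so that dot product equals cosine similarity
+ normalize = models.Normalize()
+
+ model = SentenceTransformer(modules=[transformer, pooling, normalize])
+ ```
+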
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("MugheesAwan11/bge-base-securiti-dataset-1-v20")
+ # Run inference
+ sentences = [
+ 'Careers View Events Spotlight Talks IDC Names Securiti a Worldwide Leader in Data Privacy View Events Spotlight Talks Education Contact Us Schedule a Demo Products By Use Cases By Roles Data Command Center View Learn more Asset and Data Discovery Discover dark and native data assets Learn more Data Access Intelligence & Governance Identify which users have access to sensitive data and prevent unauthorized access Learn more Data Privacy Automation PrivacyCenter.Cloud | Data Mapping | DSR Automation | Assessment Automation | Vendor Assessment | Breach Management | Privacy Notice Learn more Sensitive Data Intelligence Discover & Classify Structured and Unstructured Data | People Data Graph Learn more Data Flow Intelligence & Governance Prevent sensitive data sprawl through real-, Press Releases View Careers View Events Spotlight Talks IDC Names Securiti a Worldwide Leader in Data Privacy View Events Spotlight Talks Education Contact Us Schedule a Demo Products By Use Cases By Roles Data Command Center View Learn more Asset and Data Discovery Discover dark and native data assets Learn more Data Access Intelligence & Governance Identify which users have access to sensitive data and prevent unauthorized access Learn more Data Privacy Automation PrivacyCenter.Cloud | Data Mapping | DSR Automation | Assessment Automation | Vendor Assessment | Breach Management | Privacy Notice Learn more Sensitive Data Intelligence Discover & Classify Structured and Unstructured Data | People Data Graph Learn more Data Flow Intelligence & Governance Prevent',
+ 'What is the purpose of the Data Command Center?',
+ "What are IBM's future prospects and preparedness for new business opportunities?",
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 768]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
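+
+ Since training used `MatryoshkaLoss` at 768, 512, and 256 dimensions (see the training details below), the embeddings can also be truncated to a smaller size with only a modest drop in retrieval quality. A minimal sketch, assuming the `truncate_dim` option available in recent Sentence Transformers releases:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Keep only the first 256 embedding dimensions at encode time
+ model_256 = SentenceTransformer("MugheesAwan11/bge-base-securiti-dataset-1-v20", truncate_dim=256)
+ embeddings = model_256.encode(["What is the purpose of the Data Command Center?"])
+ print(embeddings.shape)
+ # (1, 256)
+ ```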
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+ * Dataset: `dim_768`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric | Value |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1 | 0.3485 |
+ | cosine_accuracy@3 | 0.5856 |
+ | cosine_accuracy@5 | 0.6701 |
+ | cosine_accuracy@10 | 0.7567 |
+ | cosine_precision@1 | 0.3485 |
+ | cosine_precision@3 | 0.1952 |
+ | cosine_precision@5 | 0.134 |
+ | cosine_precision@10 | 0.0757 |
+ | cosine_recall@1 | 0.3485 |
+ | cosine_recall@3 | 0.5856 |
+ | cosine_recall@5 | 0.6701 |
+ | cosine_recall@10 | 0.7567 |
+ | cosine_ndcg@10 | 0.5507 |
+ | cosine_mrr@10 | 0.4849 |
+ | **cosine_map@100** | **0.4942** |
+
+ #### Information Retrieval
+ * Dataset: `dim_512`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric | Value |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1 | 0.3464 |
+ | cosine_accuracy@3 | 0.5938 |
+ | cosine_accuracy@5 | 0.668 |
+ | cosine_accuracy@10 | 0.7567 |
+ | cosine_precision@1 | 0.3464 |
+ | cosine_precision@3 | 0.1979 |
+ | cosine_precision@5 | 0.1336 |
+ | cosine_precision@10 | 0.0757 |
+ | cosine_recall@1 | 0.3464 |
+ | cosine_recall@3 | 0.5938 |
+ | cosine_recall@5 | 0.668 |
+ | cosine_recall@10 | 0.7567 |
+ | cosine_ndcg@10 | 0.5518 |
+ | cosine_mrr@10 | 0.486 |
+ | **cosine_map@100** | **0.4956** |
+
+ #### Information Retrieval
+ * Dataset: `dim_256`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric | Value |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1 | 0.3423 |
+ | cosine_accuracy@3 | 0.567 |
+ | cosine_accuracy@5 | 0.6619 |
+ | cosine_accuracy@10 | 0.7485 |
+ | cosine_precision@1 | 0.3423 |
+ | cosine_precision@3 | 0.189 |
+ | cosine_precision@5 | 0.1324 |
+ | cosine_precision@10 | 0.0748 |
+ | cosine_recall@1 | 0.3423 |
+ | cosine_recall@3 | 0.567 |
+ | cosine_recall@5 | 0.6619 |
+ | cosine_recall@10 | 0.7485 |
+ | cosine_ndcg@10 | 0.5413 |
+ | cosine_mrr@10 | 0.4753 |
+ | **cosine_map@100** | **0.4846** |
+
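+ These tables come from `InformationRetrievalEvaluator`, which takes a mapping of query ids to query text, document ids to passage text, and query ids to the set of relevant document ids. A minimal sketch with placeholder data (the actual evaluation split is not published in this repository):
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.evaluation import InformationRetrievalEvaluator
+
+ model = SentenceTransformer("MugheesAwan11/bge-base-securiti-dataset-1-v20")
+
+ queries = {"q1": "What is the purpose of the Data Command Center?"}   # query id -> question
+ corpus = {"d1": "Data Command Center View Learn more Asset and Data Discovery ..."}  # doc id -> passage
+ relevant_docs = {"q1": {"d1"}}                                        # query id -> relevant doc ids
+
+ evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
+ metrics = evaluator(model)
+ print(metrics)  # includes cosine_accuracy@k, cosine_ndcg@10, cosine_mrr@10, cosine_map@100
+ ```
+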
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+
+ * Size: 494 training samples
+ * Columns: <code>positive</code> and <code>anchor</code>
+ * Approximate statistics based on the first 1000 samples:
+ | | positive | anchor |
+ |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
+ | type | string | string |
+ | details | <ul><li>min: 18 tokens</li><li>mean: 223.56 tokens</li><li>max: 414 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 21.87 tokens</li><li>max: 102 tokens</li></ul> |
+ * Samples:
+ | positive | anchor |
+ |:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------|
+ | <code>### Denmark #### Denmark **Effective Date** : May 25, 2018 **Region** : EMEA (Europe, Middle East, Africa) Similar to other EU countries, Denmark has enacted a data protection act for the purpose of implementing the GDPR in the country. The Danish Data Protection Act (Act No. 502 of 23 May 2018) was enacted for the protection of natural persons with respect to personal data processing and to regulate the free movement of personal data. The Act replaced the previous Danish Act on Processing of Personal Data (Act no. 429 of 31/05/2000). Under the new Act, the Danish Data Protection Authority (Datatilsynet) oversees all aspects related to the supervision and enforcement of the Data Protection Act and the GDPR within the country as well as representing Denmark in the European Data Protection Board. ### Finland #### Finland **Effective Date** : January 1, 2019 **Region** : EMEA (Europe</code> | <code>What is the role of the Danish Data Protection Authority in Denmark's implementation of the GDPR?</code> |
+ | <code>CPRA compliance involves adhering to the requirements outlined in the California Privacy Rights Act (CPRA) to protect consumer privacy and data rights. ## Join Our Newsletter Get all the latest information, law updates and more delivered to your inbox ### Share Copy 91 ### More Stories that May Interest You View More September 13, 2023 ## Kuwait's DPPR Kuwait didn’t have any data protection law until the Communication and Information Technology Regulatory Authority (CITRA) introduced the Data Privacy Protection Regulation (DPPR). The... View More September 11, 2023 ## Indonesia’s Protection of Personal Data Law: Explained In January 2020, Indonesia joined the burgeoning list of countries with their own data protection regulations. Provisions for data protection had existed within various... View More August 31, 2023 ##</code> | <code>Why is it important to comply with CPRA requirements and how does it protect data rights?</code> |
+ | <code>Data Access Intelligence & Governance Identify which users have access to sensitive data and prevent unauthorized access Learn more Data Privacy Automation PrivacyCenter.Cloud | Data Mapping | DSR Automation | Assessment Automation | Vendor Assessment | Breach Management | Privacy Notice Learn more Sensitive Data Intelligence Discover & Classify Structured and Unstructured Data | People Data Graph Learn more Data Flow Intelligence & Governance Prevent sensitive data sprawl through real-time streaming platforms Learn more Data Consent Automation First Party Consent | Third Party & Cookie Consent Learn more Data Security Posture Management Secure sensitive data in hybrid multicloud and SaaS environments Learn more Data Breach Impact Analysis & Response Analyze impact of a data breach and coordinate response per global regulatory obligations Learn more Data Catalog Automatically, Data Access Intelligence & Governance Identify which users have access to sensitive data and prevent unauthorized access Learn more Data Privacy Automation PrivacyCenter.Cloud | Data Mapping | DSR Automation | Assessment Automation | Vendor Assessment | Breach Management | Privacy Notice Learn more Sensitive Data Intelligence Discover & Classify Structured and Unstructured Data | People Data Graph Learn more Data Flow Intelligence & Governance Prevent sensitive data sprawl through real-time streaming platforms Learn more Data Consent Automation First Party Consent | Third Party & Cookie Consent Learn more Data Security Posture Management Secure sensitive data in hybrid multicloud and SaaS environments Learn more Data Breach Impact Analysis & Response Analyze impact of a data breach and coordinate response per global regulatory obligations Learn more Data Catalog Automatically catalog</code> | <code>What is the role of Vendor Assessment in securing and protecting sensitive data in Data Access Intelligence & Governance?</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+ ```json
+ {
+ "loss": "MultipleNegativesRankingLoss",
+ "matryoshka_dims": [
+ 768,
+ 512,
+ 256
+ ],
+ "matryoshka_weights": [
+ 1,
+ 1,
+ 1
+ ],
+ "n_dims_per_step": -1
+ }
+ ```
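+
+ In code, these parameters correspond to wrapping the in-batch-negatives ranking loss in `MatryoshkaLoss`. A minimal sketch of the equivalent construction (the exact training script is not part of this repository):
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
+
+ model = SentenceTransformer("BAAI/bge-base-en-v1.5")
+
+ # Ranking loss over (anchor, positive) pairs with in-batch negatives
+ base_loss = MultipleNegativesRankingLoss(model)
+ # Apply the same loss at 768, 512 and 256 dimensions, weighted equally
+ loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 256], matryoshka_weights=[1, 1, 1])
+ ```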
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: epoch
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 16
+ - `learning_rate`: 2e-05
+ - `num_train_epochs`: 4
+ - `lr_scheduler_type`: cosine
+ - `warmup_ratio`: 0.1
+ - `bf16`: True
+ - `tf32`: True
+ - `load_best_model_at_end`: True
+ - `optim`: adamw_torch_fused
+ - `batch_sampler`: no_duplicates
+
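+ Expressed as `SentenceTransformerTrainingArguments`, the non-default values above look roughly like the sketch below; the output directory is a placeholder and every remaining value is left at its default (see the full list that follows):
+
+ ```python
+ from sentence_transformers.training_args import SentenceTransformerTrainingArguments, BatchSamplers
+
+ args = SentenceTransformerTrainingArguments(
+     output_dir="output",  # placeholder; the actual path is not recorded in this card
+     num_train_epochs=4,
+     per_device_train_batch_size=32,
+     per_device_eval_batch_size=16,
+     learning_rate=2e-5,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.1,
+     bf16=True,
+     tf32=True,
+     eval_strategy="epoch",
+     load_best_model_at_end=True,
+     optim="adamw_torch_fused",
+     batch_sampler=BatchSamplers.NO_DUPLICATES,
+ )
+ ```
+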
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: epoch
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 16
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 4
+ - `max_steps`: -1
+ - `lr_scheduler_type`: cosine
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: True
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: True
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch_fused
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
+ ### Training Logs
+ | Epoch | Step | Training Loss | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_768_cosine_map@100 |
+ |:-------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|
+ | 0.625 | 10 | 3.7981 | - | - | - |
+ | 1.0 | 16 | - | 0.4653 | 0.4819 | 0.4810 |
+ | 1.25 | 20 | 2.2066 | - | - | - |
+ | 1.875 | 30 | 1.668 | - | - | - |
+ | 2.0 | 32 | - | 0.4837 | 0.4905 | 0.4933 |
+ | 2.5 | 40 | 0.9807 | - | - | - |
+ | **3.0** | **48** | **-** | **0.4846** | **0.4954** | **0.4949** |
+ | 3.125 | 50 | 1.0226 | - | - | - |
+ | 3.75 | 60 | 1.0564 | - | - | - |
+ | 4.0 | 64 | - | 0.4846 | 0.4956 | 0.4942 |
+
+ * The bold row denotes the saved checkpoint.
+
+ ### Framework Versions
+ - Python: 3.10.14
+ - Sentence Transformers: 3.0.1
+ - Transformers: 4.41.2
+ - PyTorch: 2.1.2+cu121
+ - Accelerate: 0.31.0
+ - Datasets: 2.19.1
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+ author = "Reimers, Nils and Gurevych, Iryna",
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+ month = "11",
+ year = "2019",
+ publisher = "Association for Computational Linguistics",
+ url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+ title={Matryoshka Representation Learning},
+ author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+ year={2024},
+ eprint={2205.13147},
+ archivePrefix={arXiv},
+ primaryClass={cs.LG}
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+ year={2017},
+ eprint={1705.00652},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+ "_name_or_path": "BAAI/bge-base-en-v1.5",
+ "architectures": [
+ "BertModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "classifier_dropout": null,
+ "gradient_checkpointing": false,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 768,
+ "id2label": {
+ "0": "LABEL_0"
+ },
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "label2id": {
+ "LABEL_0": 0
+ },
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "pad_token_id": 0,
+ "position_embedding_type": "absolute",
+ "torch_dtype": "float32",
+ "transformers_version": "4.41.2",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "__version__": {
+ "sentence_transformers": "3.0.1",
+ "transformers": "4.41.2",
+ "pytorch": "2.1.2+cu121"
+ },
+ "prompts": {},
+ "default_prompt_name": null,
+ "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e2533d12b465b80b90b124dcfc31764b6bc900bf8f45dbcc20cce79163120a5
+ size 437951328
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Normalize",
+ "type": "sentence_transformers.models.Normalize"
+ }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 512,
+ "do_lower_case": true
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "cls_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "[CLS]",
+ "do_basic_tokenize": true,
+ "do_lower_case": true,
+ "mask_token": "[MASK]",
+ "model_max_length": 512,
+ "never_split": null,
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff