djovak committed on
Commit d407180
1 Parent(s): b7cf012

add model v2, finetuned with triplets

README.md CHANGED
@@ -1 +1,56 @@
- yotta-embeddings
+ ---
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+
+ ---
+
+ # {MODEL_NAME}
+
+ This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
+
+ <!--- Describe your model here -->
+
+ ## Usage (Sentence-Transformers)
+
+ Using this model is easy once you have [sentence-transformers](https://www.SBERT.net) installed:
+
+ ```
+ pip install -U sentence-transformers
+ ```
+
+ Then you can use the model like this:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ sentences = ["This is an example sentence", "Each sentence is converted"]
+
+ model = SentenceTransformer('{MODEL_NAME}')
+ embeddings = model.encode(sentences)
+ print(embeddings)
+ ```
+
+
+
+ ## Evaluation Results
+
+ <!--- Describe how your model was evaluated -->
+
+ For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
+
+
+
+ ## Full Model Architecture
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
+   (2): Normalize()
+ )
+ ```
+
+ ## Citing & Authors
+
+ <!--- Describe where people can find more information -->
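Supplementary note (not part of the committed model card): because the architecture above ends with mean Pooling followed by Normalize(), the 384-dimensional embeddings come out L2-normalized, so cosine similarity and dot product give the same ranking. A minimal semantic-search sketch, assuming the `{MODEL_NAME}` placeholder is replaced with this repository's id and using standard sentence-transformers utilities:

```python
from sentence_transformers import SentenceTransformer, util

# '{MODEL_NAME}' is the placeholder from the card above; substitute the actual repo id.
model = SentenceTransformer('{MODEL_NAME}')

query = "How do I install the library?"
docs = [
    "Run `pip install -U sentence-transformers` to get started.",
    "The model maps sentences to a 384-dimensional dense vector space.",
]

# The trailing Normalize() module means encode() returns unit-length vectors,
# so cosine similarity reduces to a plain dot product.
query_emb = model.encode(query)
doc_embs = model.encode(docs)

scores = util.cos_sim(query_emb, doc_embs)  # tensor of shape (1, len(docs))
best = int(scores.argmax())
print(f"Best match ({scores[0, best].item():.3f}): {docs[best]}")
```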
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "yotta-test-128-bigdata2/checkpoint-14000",
+ "_name_or_path": "yotta-test-128-bigdata2-loss2.0-finetune1.0/checkpoint-100",
  "architectures": [
  "BertModel"
  ],
global_step14000/mp_rank_00_model_states.pt DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:da7bcff7df3cae31544b727296c0c4e366f0c77d7bf872395d278c6905e1999f
- size 66781164
global_step14000/zero_pp_rank_0_mp_rank_00_optim_states.pt DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:09dd52c57550d2f22982b25ee5106adde4b612a4690990255e1c7a5eb2ee9884
- size 100084013
global_step14000/zero_pp_rank_1_mp_rank_00_optim_states.pt DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:430d54df842f13fedbe17de91bedf586c8fcc969ed683cef5b8e29b9a20560ab
- size 100087981
global_step14000/zero_pp_rank_2_mp_rank_00_optim_states.pt DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:422a7985728e4656c23d96e2faee5739c1a8e152aba740e1772b443d821cec46
- size 100090541
global_step14000/zero_pp_rank_3_mp_rank_00_optim_states.pt DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:5276654d01bb88d305dd050598dec2271325d15e08a297965c60e1a160e9f30f
- size 100089901
latest CHANGED
@@ -1 +1 @@
- global_step14000
+ global_step100
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:24e3676999749376499291e24ffbfbdd3b14d59343ee006899703d1b2ce06954
+ oid sha256:3d130ee6fd0f15a672569924be1797044dcc5f864c09532a9c17fa063fec47ef
  size 133462128
rng_state_0.pth CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:999e6df340f1e9b8c1f052c61186c25fd6ad49a1d9ff923a955da3edc3c24c1d
+ oid sha256:1a5932083714909aaa1e5c386bd929068ebf36c4db10eca7139c64a481b97d75
  size 15024
rng_state_1.pth CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:20207df83079aea6378b36b56761d185cc74bcbe6464cce19adcb4a752bc49f2
+ oid sha256:3f6f9b48e4880be5f806135b0fd9e1347e74d39961b1f9900767bd2852d1111c
  size 15024
rng_state_2.pth CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:34c7315fcc0337ee725653181d4c050c54fe6e9e0fd69367fd11e6b2a846d997
+ oid sha256:17436b7dc89fb33efb07cdbd23a57485ff0aa9ff0a03bba924a798ebbdf4df19
  size 15024
rng_state_3.pth CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c204d50fd5192eb941fcb11bf7696043711231366b45889b9d26e7b1aadcc5ca
+ oid sha256:855807fb2a46c376ac722ab7b0960f43ccfe5c5286ca63b910a7a2def25e70b6
  size 15024
tokenizer.json CHANGED
@@ -2,7 +2,7 @@
  "version": "1.0",
  "truncation": {
  "direction": "Right",
- "max_length": 150,
+ "max_length": 512,
  "strategy": "LongestFirst",
  "stride": 0
  },
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9fc149063e6c2e339d889acda094053fa002e2b335ee64d36c6b402cdd0e2417
+ oid sha256:22c102af02a6d9034289b9221b868d623d9e167d791ba81249d54c6df4c83934
  size 6264