w11wo committed
Commit 702a08b
1 Parent(s): bd70c54

Updated Model

1_Pooling/config.json CHANGED
@@ -3,5 +3,7 @@
   "pooling_mode_cls_token": false,
   "pooling_mode_mean_tokens": true,
   "pooling_mode_max_tokens": false,
-  "pooling_mode_mean_sqrt_len_tokens": false
+  "pooling_mode_mean_sqrt_len_tokens": false,
+  "pooling_mode_weightedmean_tokens": false,
+  "pooling_mode_lasttoken": false
 }
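The updated config keeps mean pooling as the only active mode; the two new keys (`pooling_mode_weightedmean_tokens`, `pooling_mode_lasttoken`) are newer sentence-transformers pooling options that remain disabled here. As a rough illustration (not the library's own implementation), mean pooling averages the token embeddings while masking out padding:

```python
# Minimal sketch of the mean pooling this config enables
# (pooling_mode_mean_tokens: true, all other modes false).
# Illustrative only; sentence-transformers' Pooling module is the reference.
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings, ignoring padding positions."""
    # attention_mask: (batch, seq_len) with 1 for real tokens, 0 for padding.
    mask = attention_mask.unsqueeze(-1).type_as(token_embeddings)  # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)                  # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)                       # (batch, 1)
    return summed / counts                                         # (batch, hidden)

# Example: batch of 2 sequences, 4 tokens, 768-dim embeddings (as in this model).
emb = torch.randn(2, 4, 768)
mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])
print(mean_pool(emb, mask).shape)  # torch.Size([2, 768])
```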
README.md CHANGED
@@ -8,7 +8,7 @@ tags:
 
 ---
 
-# SimCSE-IndoBERT Base
+# LazarusNLP/simcse-indobert-base
 
 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
 
@@ -85,7 +85,7 @@ The model was trained with the parameters:
 
 **DataLoader**:
 
-`torch.utils.data.dataloader.DataLoader` of length 782 with parameters:
+`torch.utils.data.dataloader.DataLoader` of length 7813 with parameters:
 ```
 {'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
 ```
@@ -120,7 +120,7 @@ Parameters of the fit()-Method:
 ```
 SentenceTransformer(
   (0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: BertModel
-  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
+  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
 )
 ```
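The README heading now uses the full repo ID. A minimal usage sketch for the architecture documented above, assuming the model is published on the Hub as `LazarusNLP/simcse-indobert-base` (matching the new heading) and using made-up example sentences:

```python
# Usage sketch for the architecture documented above
# (Transformer with max_seq_length 32 -> mean Pooling, 768-dim output).
# Assumes the repo is published as "LazarusNLP/simcse-indobert-base";
# the sentences are illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("LazarusNLP/simcse-indobert-base")

sentences = [
    "Kucing itu duduk di atas tikar.",     # "The cat sat on the mat."
    "Seekor kucing bersantai di karpet.",  # "A cat is lounging on the carpet."
]
embeddings = model.encode(sentences)       # shape: (2, 768)

# Cosine similarity between the two sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```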
 
config_sentence_transformers.json CHANGED
@@ -2,6 +2,6 @@
   "__version__": {
     "sentence_transformers": "2.2.2",
     "transformers": "4.29.2",
-    "pytorch": "2.0.1+cu118"
+    "pytorch": "2.0.1+cu117"
   }
 }
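This file only records the framework versions the model was exported with (the change pins the recorded PyTorch build to cu117); it is informational rather than enforced. A small sketch, assuming a local checkout of the repo, that compares the recorded versions with the installed packages:

```python
# Compare the recorded framework versions against the local environment.
# Assumes config_sentence_transformers.json sits in the current directory.
import json

import torch
import transformers
import sentence_transformers

with open("config_sentence_transformers.json") as f:
    recorded = json.load(f)["__version__"]

installed = {
    "sentence_transformers": sentence_transformers.__version__,
    "transformers": transformers.__version__,
    "pytorch": torch.__version__,
}

for key, want in recorded.items():
    have = installed.get(key, "unknown")
    status = "OK" if have == want else "MISMATCH"
    print(f"{key}: recorded={want} installed={have} [{status}]")
```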
eval/similarity_evaluation_results.csv CHANGED
@@ -1,2 +1,2 @@
 epoch,steps,cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
-0,-1,0.6210425880472122,0.6154181874927932,0.6325135591051501,0.629094195339931,0.6314686880278273,0.6279293037763658,0.3827246918814826,0.36823825371127544
+0,-1,0.6891775818786484,0.6823645991985652,0.7090120719350553,0.7012689286635643,0.7086053381965113,0.7013742069737221,0.543402261253142,0.5333750931314221
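The columns match the CSV written by sentence-transformers' `EmbeddingSimilarityEvaluator` (Pearson and Spearman correlations for cosine, Euclidean, Manhattan, and dot-product similarity); the new row reflects the retrained weights. A sketch of how such a file is produced, using placeholder sentence pairs and gold scores rather than the actual evaluation set:

```python
# Sketch of how eval/similarity_evaluation_results.csv is produced.
# The sentence pairs and gold scores below are placeholders, not the
# evaluation set actually used for this model.
import os

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("LazarusNLP/simcse-indobert-base")  # assumed Hub ID

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["Kucing itu duduk di atas tikar.", "Dia pergi ke pasar."],
    sentences2=["Seekor kucing bersantai di karpet.", "Cuaca hari ini cerah."],
    scores=[0.9, 0.1],  # gold similarity scores, typically normalized to [0, 1]
)

# Writes similarity_evaluation_results.csv into output_path with the
# epoch,steps,cosine_pearson,...,dot_spearman columns shown above.
os.makedirs("eval", exist_ok=True)
evaluator(model, output_path="eval")
```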
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8137eb18a9480e136dd4a1c247532f6925575b76792f73c7699c8ae10803a4ea
+oid sha256:b45e3d401265065fd380772e6ca15aee0203b37d84b9bb25ce507c16bf4105e7
 size 497836589
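`pytorch_model.bin` is tracked with Git LFS, so the diff only replaces the pointer's SHA-256 while the file size stays the same. A short sketch, assuming the actual weights (not the pointer file) have been pulled locally, to verify the download against the new digest:

```python
# Verify that a locally downloaded pytorch_model.bin matches the new LFS pointer.
import hashlib

EXPECTED_SHA256 = "b45e3d401265065fd380772e6ca15aee0203b37d84b9bb25ce507c16bf4105e7"
EXPECTED_SIZE = 497836589  # bytes, from the pointer file

sha = hashlib.sha256()
size = 0
with open("pytorch_model.bin", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        sha.update(chunk)
        size += len(chunk)

print("size ok:  ", size == EXPECTED_SIZE)
print("sha256 ok:", sha.hexdigest() == EXPECTED_SHA256)
```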