reza-aditya committed
Commit 448aff6
1 Parent(s): e4aa045

Upload with huggingface_hub

Files changed (4)
  1. README.md +7 -7
  2. config.json +1 -1
  3. pytorch_model.bin +1 -1
  4. tokenizer_config.json +1 -1
README.md CHANGED
@@ -28,7 +28,7 @@ Then you can use the model like this:
 from sentence_transformers import SentenceTransformer
 sentences = ["This is an example sentence", "Each sentence is converted"]
 
-model = SentenceTransformer('reza-aditya/distilroberta-base-sentence-transformer')
+model = SentenceTransformer('{MODEL_NAME}')
 embeddings = model.encode(sentences)
 print(embeddings)
 ```
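The new line swaps the concrete repo id for the card template's '{MODEL_NAME}' placeholder; the id being removed, 'reza-aditya/distilroberta-base-sentence-transformer', is what a reader would actually pass. A minimal usage sketch with that id filled in, using nothing beyond what the hunk above shows:

```python
# Load the uploaded model by its Hub repo id and embed the example sentences.
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer("reza-aditya/distilroberta-base-sentence-transformer")
embeddings = model.encode(sentences)  # NumPy array, one row per sentence
print(embeddings.shape)
```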
@@ -85,22 +85,22 @@ The model was trained with the parameters:
 
 **DataLoader**:
 
-`torch.utils.data.dataloader.DataLoader` of length 3181 with parameters:
+`torch.utils.data.dataloader.DataLoader` of length 5625 with parameters:
 ```
-{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
+{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
 ```
 
 **Loss**:
 
-`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
+`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
 ```
-{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
+{'scale': 20.0, 'similarity_fct': 'cos_sim'}
 ```
 
 Parameters of the fit()-Method:
 ```
 {
-    "epochs": 10,
+    "epochs": 2,
     "evaluation_steps": 0,
     "evaluator": "NoneType",
     "max_grad_norm": 1,
@@ -110,7 +110,7 @@ Parameters of the fit()-Method:
     },
     "scheduler": "WarmupLinear",
     "steps_per_epoch": null,
-    "warmup_steps": 3181,
+    "warmup_steps": 1125,
     "weight_decay": 0.01
 }
 ```
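Taken together, the new values describe a run of 2 epochs over a 5625-batch DataLoader (batch size 32) with MultipleNegativesRankingLoss at its defaults (scale 20.0, cosine similarity) and 1125 warmup steps. Below is a hedged sketch of how these hyperparameters would typically be wired into sentence-transformers' fit() API; the training pairs are placeholders, and the base checkpoint name is taken from the old `_name_or_path` in the config.json hunk further down, not from this README:

```python
# Reconstruction sketch: only the hyperparameters visible in the diff are real;
# the dataset and the base checkpoint choice are illustrative assumptions.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("distilroberta-base")  # old _name_or_path per config.json

# Placeholder pairs; the actual DataLoader had length 5625 at batch_size 32.
train_examples = [InputExample(texts=["anchor sentence", "matching sentence"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

# Defaults match the card: scale=20.0, similarity_fct=cos_sim.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,                  # "epochs": 2
    evaluation_steps=0,        # "evaluation_steps": 0
    scheduler="WarmupLinear",  # "scheduler": "WarmupLinear"
    warmup_steps=1125,         # "warmup_steps": 1125
    weight_decay=0.01,         # "weight_decay": 0.01
    max_grad_norm=1,           # "max_grad_norm": 1
)
```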
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "distilroberta-base",
+  "_name_or_path": "/root/.cache/torch/sentence_transformers/reza-aditya_distilroberta-base-sentence-transformer/",
   "architectures": [
     "RobertaModel"
   ],
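One plausible reading of this change (an assumption, not stated in the commit): the files were re-saved from a local copy that sentence-transformers had downloaded into its cache, and transformers records the load location in `_name_or_path` when the config is written back out. The same pattern appears in the tokenizer_config.json hunk below. A sketch of that round trip:

```python
# Illustrative assumption: loading from the Hub caches the files locally, and
# re-saving writes that local cache path into config.json's "_name_or_path".
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("reza-aditya/distilroberta-base-sentence-transformer")
model.save("distilroberta-base-sentence-transformer")  # folder then uploaded with huggingface_hub
```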
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4b2b88fe69d71be76d3fa820769ed343f42ce24604304f8b0ca1b2d61ceb2d54
+oid sha256:271865305016d69a973b13e425b956b4c301a4862d531320b8d1901b9b5a5fe9
 size 328511153
tokenizer_config.json CHANGED
@@ -6,7 +6,7 @@
   "errors": "replace",
   "mask_token": "<mask>",
   "model_max_length": 512,
-  "name_or_path": "distilroberta-base",
+  "name_or_path": "/root/.cache/torch/sentence_transformers/reza-aditya_distilroberta-base-sentence-transformer/",
   "pad_token": "<pad>",
   "sep_token": "</s>",
   "special_tokens_map_file": null,