legacy107 committed
Commit 1122846
1 Parent(s): dac9e4a

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +6 -13
README.md CHANGED
@@ -5,15 +5,14 @@ tags:
  - feature-extraction
  - sentence-similarity
  - transformers
- datasets:
- - legacy107/qa_wikipedia_sentence_transformer
+
  ---

- # multi-qa-mpnet-base-dot-v1-wikipedia-search
+ # {MODEL_NAME}

  This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

- This is a fine-tuned version on [legacy107/qa_wikipedia_sentence_transformer](https://huggingface.co/datasets/legacy107/qa_wikipedia_sentence_transformer)
+ <!--- Describe your model here -->

  ## Usage (Sentence-Transformers)

@@ -73,13 +72,7 @@ print(sentence_embeddings)

  ## Evaluation Results

- Result at step 16000
-
- | accuracy_cosinus | accuracy_manhattan | accuracy_euclidean |
- | -------- | -------- | -------- |
- | 0.986408328513592 | 0.986697513013302 | 0.986408328513592 |
-
-
+ <!--- Describe how your model was evaluated -->

  For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

@@ -89,7 +82,7 @@ The model was trained with the parameters:

  **DataLoader**:

- `torch.utils.data.dataloader.DataLoader` of length 3468 with parameters:
+ `torch.utils.data.dataloader.DataLoader` of length 3746 with parameters:
  ```
  {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
  ```
@@ -114,7 +107,7 @@ Parameters of the fit()-Method:
  },
  "scheduler": "WarmupLinear",
  "steps_per_epoch": null,
- "warmup_steps": 1734,
+ "warmup_steps": 1873,
  "weight_decay": 0.01
  }
  ```
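
The README's `## Usage (Sentence-Transformers)` section sits outside the changed hunks, so the usage snippet itself is not visible in this diff. As context, a minimal sketch using the standard sentence-transformers API might look like the following; the repository id is an assumption based on the old card title, not something the diff confirms:

```python
# Hedged usage sketch; the repo id below is assumed from the old card title
# ("multi-qa-mpnet-base-dot-v1-wikipedia-search" under user legacy107).
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer("legacy107/multi-qa-mpnet-base-dot-v1-wikipedia-search")  # assumed repo id
embeddings = model.encode(sentences)  # one 768-dimensional vector per input sentence
print(embeddings.shape)
```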
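
The last two hunks only update logged training values: the DataLoader length changes from 3468 to 3746 and `warmup_steps` from 1734 to 1873, while `batch_size`, the `WarmupLinear` scheduler, and `weight_decay` stay the same. A hedged sketch of how such logged parameters map onto the sentence-transformers training API is below; the base checkpoint, loss function, training data, and epoch count are illustrative assumptions, since none of them appear in the hunks shown:

```python
# Hedged training sketch; dataset, loss, base checkpoint, and epoch count are assumptions.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/multi-qa-mpnet-base-dot-v1")  # assumed base model

train_examples = [InputExample(texts=["example question", "matching passage"])]  # placeholder data
train_dataloader = DataLoader(train_examples, batch_size=8, shuffle=True)  # shuffle=True -> RandomSampler
train_loss = losses.MultipleNegativesRankingLoss(model)  # assumed loss, not shown in the diff

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,                   # placeholder; the real epoch count is outside the shown hunks
    scheduler="WarmupLinear",   # from the fit() parameters in the diff
    warmup_steps=1873,          # the updated value in the diff
    weight_decay=0.01,          # from the fit() parameters in the diff
    steps_per_epoch=None,       # from the fit() parameters in the diff
)
```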