Thomas Müller committed
Commit a1ca4e6
1 Parent(s): a623882

Adjusts model card.

Files changed (1)
1. README.md +11 -61
README.md CHANGED
@@ -1,6 +1,12 @@
 ---
+language:
+- en
+datasets:
+- SNLI
+- MNLI
 pipeline_tag: sentence-similarity
 tags:
+- zero-shot-classification
 - sentence-transformers
 - feature-extraction
 - sentence-similarity
@@ -9,9 +15,12 @@ tags:
 
 # {MODEL_NAME}
 
-This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
+A Siamese network model trained for zero-shot and few-shot text classification.
 
-<!--- Describe your model here -->
+The base model is [mpnet-base](https://huggingface.co/microsoft/mpnet-base).
+It was trained on [SNLI](https://nlp.stanford.edu/projects/snli/) and [MNLI](https://cims.nyu.edu/~sbowman/multinli/).
+
+This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space.
 
 ## Usage (Sentence-Transformers)
 
@@ -33,7 +42,6 @@ print(embeddings)
 ```
 
 
-
 ## Usage (HuggingFace Transformers)
 Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
 
@@ -69,61 +77,3 @@ sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
 print("Sentence embeddings:")
 print(sentence_embeddings)
 ```
-
-
-
-## Evaluation Results
-
-<!--- Describe how your model was evaluated -->
-
-For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
-
-
-## Training
-The model was trained with the parameters:
-
-**DataLoader**:
-
-`zsde.training.NoDuplicatesDataLoader` of length 75000 with parameters:
-```
-{'batch_size': 16}
-```
-
-**Loss**:
-
-`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
-```
-{'scale': 20.0, 'similarity_fct': 'cos_sim'}
-```
-
-Parameters of the fit()-Method:
-```
-{
-    "callback": null,
-    "epochs": 1,
-    "evaluation_steps": 7500,
-    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
-    "max_grad_norm": 1,
-    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
-    "optimizer_params": {
-        "lr": 2e-05
-    },
-    "scheduler": "WarmupLinear",
-    "steps_per_epoch": 75000,
-    "warmup_steps": 7500,
-    "weight_decay": 0.01
-}
-```
-
-
-## Full Model Architecture
-```
-SentenceTransformer(
-  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel
-  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
-)
-```
-
-## Citing & Authors
-
-<!--- Describe where people can find more information -->
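Two usage sections survive the commit, but the diff shows only their edges. The `Usage (Sentence-Transformers)` snippet (only its closing fence and `print(embeddings)` are visible as context) follows the standard pattern; a minimal sketch, using the card's own `{MODEL_NAME}` placeholder:

```
# Minimal sketch of the unchanged Sentence-Transformers usage section;
# '{MODEL_NAME}' is the card's placeholder for the actual model id.
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)  # one 768-dimensional vector per input
print(embeddings)
```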
 
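The new introduction pitches the model at zero-shot and few-shot classification, but the card shows no classification example. A minimal sketch of the zero-shot route, assuming each label is turned into a short hypothesis sentence (the template and labels here are illustrative, not from the card):

```
# Hypothetical zero-shot classification with the Siamese encoder:
# embed the text and one hypothesis per label, pick the closest label.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')

text = "The team scored twice in the final ten minutes."
labels = ["sports", "politics", "technology"]
hypotheses = [f"This example is about {label}." for label in labels]  # assumed template

text_emb = model.encode(text, convert_to_tensor=True)
hyp_embs = model.encode(hypotheses, convert_to_tensor=True)

scores = util.cos_sim(text_emb, hyp_embs)[0]  # cosine similarity per label
print(labels[int(scores.argmax())])
```

One common few-shot variant keeps the same scoring but represents each label by the mean embedding of its labeled examples instead of a template sentence.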
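The `Usage (HuggingFace Transformers)` snippet is likewise elided except for its first and last lines (`sentence_embeddings = mean_pooling(...)` and the two `print` calls). Its body is the standard mean-pooling recipe the paragraph above describes, approximately:

```
# Approximate reconstruction of the card's HuggingFace-Transformers snippet;
# only its edges appear in the diff, the rest follows the standard recipe.
import torch
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    # Average the token embeddings, ignoring padding positions.
    token_embeddings = model_output[0]
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

sentences = ["This is an example sentence", "Each sentence is converted"]

tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)

sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```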
 
 
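The commit also deletes the auto-generated training log (NoDuplicatesDataLoader, MultipleNegativesRankingLoss, the `fit()` parameters, and the MPNet + mean-pooling architecture). Those hyperparameters map onto the public sentence-transformers API roughly as below; `zsde.training.NoDuplicatesDataLoader` is a private class, so its public counterpart is used, and `load_entailment_pairs` is a hypothetical stand-in for the SNLI/MNLI preprocessing:

```
# Hypothetical reconstruction of the removed training setup from its logged
# hyperparameters; a sketch, not the authors' actual training script.
from sentence_transformers import SentenceTransformer, models, losses, util
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Architecture from the removed "Full Model Architecture" section:
# MPNet encoder (max_seq_length=128) + mean pooling -> 768-d embeddings.
word = models.Transformer('microsoft/mpnet-base', max_seq_length=128)
pooling = models.Pooling(word.get_word_embedding_dimension(), pooling_mode='mean')
model = SentenceTransformer(modules=[word, pooling])

# Hypothetical helper: yields InputExample(texts=[premise, entailed_hypothesis])
# pairs built from SNLI and MNLI.
train_examples = load_entailment_pairs()

loader = NoDuplicatesDataLoader(train_examples, batch_size=16)
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)

model.fit(
    train_objectives=[(loader, loss)],
    epochs=1,
    evaluation_steps=7500,
    warmup_steps=7500,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
    scheduler='WarmupLinear',
)
```

MultipleNegativesRankingLoss treats every other pair in the 16-example batch as an in-batch negative, which is why the duplicate-free batching matters: a duplicate text in the batch would act as a false negative.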