Rahka committed
Commit
6cda1c9
1 Parent(s): bdf1be2

README update with new jinja template

Files changed (1):
  1. README.md +173 -51
README.md CHANGED
@@ -1,21 +1,57 @@
---
- pipeline_tag: sentence-similarity
tags:
- - sentence-transformers
- - feature-extraction
- - sentence-similarity
- - transformers
-
---

- # {MODEL_NAME}

- This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

- <!--- Describe your model here -->

- ## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
@@ -33,9 +69,7 @@ embeddings = model.encode(sentences)
print(embeddings)
```

-
-
- ## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.

```python
@@ -71,56 +105,144 @@ print("Sentence embeddings:")
print(sentence_embeddings)
```

- ## Evaluation Results

- <!--- Describe how your model was evaluated -->

- For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

- ## Training
- The model was trained with the parameters:

- **DataLoader**:

- `torch.utils.data.dataloader.DataLoader` of length 190 with parameters:
- ```
- {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
- ```

- **Loss**:

- `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

- Parameters of the fit()-Method:
- ```
- {
-     "epochs": 3,
-     "evaluation_steps": 0,
-     "evaluator": "NoneType",
-     "max_grad_norm": 1,
-     "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
-     "optimizer_params": {
-         "lr": 2e-05
-     },
-     "scheduler": "WarmupLinear",
-     "steps_per_epoch": null,
-     "warmup_steps": 100,
-     "weight_decay": 0.01
- }
- ```

- ## Full Model Architecture
- ```
- SentenceTransformer(
-   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
-   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
- )
- ```

- ## Citing & Authors

- <!--- Describe where people can find more information -->
 
---
+ language: de
+ library_name: sentence_transformers
tags:
+ - text-classification
+ model-index:
+ - name: and-effect/musterdatenkatalog_clf
+   results:
+   - task:
+       type: text-classification
+     dataset:
+       name: mdk_gov_data_titles_clf
+       type: and-effect/mdk_gov_data_titles_clf
+     metrics:
+     - type: Accuracy (Bezeichnung)
+       value: 0.7
+     - type: Precision macro (Bezeichnung)
+       value: 0.5
---

+ # Model Card for Model ID

+ <!-- Provide a quick summary of what the model is/does. -->

+ # Model Details

+ ## Model Description

+ <!-- Provide a longer summary of what this model is. -->

+ This model is based on bert-base-german-cased and fine-tuned on and-effect/mdk_gov_data_titles_clf. It reaches an accuracy of XY on the test set and XY on the validation set.

+ - **Developed by:** and-effect
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** Text Classification
+ - **Language(s) (NLP):** de
+ - **License:** XY
+ - **Finetuned from model [optional]:** bert-base-german-cased. For more information on the model, see [this model card](https://huggingface.co/bert-base-german-cased).

+ ## Model Sources [optional]

+ <!-- Provide the basic links for the model. -->

+ - **Repository:** XY GitHub repo?
+ - **Paper [optional]:** XY and-effect papers?
+ - **Demo [optional]:** XY Spaces?

+ # Direct Use

+ This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

+ ## Get Started with Sentence Transformers

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
print(embeddings)
```
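
The diff elides the middle of the snippet above; as a rough sketch of what the standard sentence-transformers usage typically looks like for this model (the example titles are made up, and the model id and-effect/musterdatenkatalog_clf is taken from the front matter):

```python
# Minimal sketch, not the verbatim README snippet: encode a few German dataset
# titles with the fine-tuned sentence-transformers model.
from sentence_transformers import SentenceTransformer

# Illustrative titles only; any list of strings works.
sentences = ["Bebauungsplan Musterstadt Nord", "Radwegenetz der Stadt 2022"]

model = SentenceTransformer("and-effect/musterdatenkatalog_clf")
embeddings = model.encode(sentences)  # one 768-dimensional vector per sentence
print(embeddings)
```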

+ ## Get Started with HuggingFace Transformers

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.

```python
print(sentence_embeddings)
```
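
The body of this block is likewise elided in the diff; below is a minimal sketch of the usual transformers-plus-mean-pooling recipe the surrounding text describes (mean pooling matches the Pooling settings listed in the old card; the example sentences are again illustrative):

```python
# Sketch of the plain-transformers route: tokenize, run the encoder, then mean-pool
# the token embeddings with the attention mask to get one vector per sentence.
import torch
from transformers import AutoTokenizer, AutoModel


def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element holds token-level embeddings
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)


sentences = ["Bebauungsplan Musterstadt Nord", "Radwegenetz der Stadt 2022"]

tokenizer = AutoTokenizer.from_pretrained("and-effect/musterdatenkatalog_clf")
model = AutoModel.from_pretrained("and-effect/musterdatenkatalog_clf")

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    model_output = model(**encoded_input)

sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
print("Sentence embeddings:")
print(sentence_embeddings)
```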

+ # Downstream Use

+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

+ The model is intended to classify open source dataset titles from German municipalities. More information on the taxonomy (classification categories) and the project can be found on XY.
+ For more information see the GitHub repo + Spaces.

+ # Bias, Risks, and Limitations

+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->

+ The model has some limitations with regard to the downstream task:
+ 1. **Distribution of classes**: The training dataset is small while the number of classes is very high, so some classes have only a few examples (more information about the class distribution of the training data can be found here). Consequently, performance on the smaller classes may not be as good as on the majority classes, and the evaluation is limited accordingly.
+ 2. **Systematic problems**: Some subjects could not be classified correctly in a systematic way. One example is the embedding of titles containing 'Corona': in none of the evaluation cases could these titles be embedded in a way that matched their true class names. Another systematic example is the embedding and classification of titles related to 'migration'.
+ 3. **Generalization of the model**: By using semantic search, the model can classify titles into categories it was not trained on, but it is not tuned for this, so its performance on unseen classes is likely to be limited (a rough sketch of this semantic-search classification follows the list).
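
As a rough illustration of the semantic-search behaviour mentioned in point 3, the sketch below embeds a handful of invented taxonomy labels and assigns each title to the nearest label by cosine similarity; the label strings are placeholders, not the real Musterdatenkatalog taxonomy:

```python
# Hedged sketch: classify titles by their nearest taxonomy label in embedding space.
# The labels below are invented placeholders, not the actual taxonomy.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("and-effect/musterdatenkatalog_clf")

labels = ["Raumplanung - Bebauungsplan", "Verkehr - Radverkehr", "Umwelt - Luftqualität"]
titles = ["Bebauungsplan Musterstadt Nord", "Radwegenetz der Stadt 2022"]

label_embeddings = model.encode(labels, convert_to_tensor=True)
title_embeddings = model.encode(titles, convert_to_tensor=True)

# Cosine similarity matrix: one row per title, one column per label.
scores = util.cos_sim(title_embeddings, label_embeddings)
for title, row in zip(titles, scores):
    print(title, "->", labels[int(row.argmax())])
```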
 

+ ## Recommendations

+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

+ # Training Details

+ ## Training Data

+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

+ [More Information Needed]

+ ## Training Procedure [optional]

+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

+ ### Preprocessing

+ [More Information Needed]

+ ### Speeds, Sizes, Times

+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

+ [More Information Needed]

+ # Evaluation

+ <!-- This section describes the evaluation protocols and provides the results. -->

+ ## Testing Data, Factors & Metrics

+ ### Testing Data

+ <!-- This should link to a Data Card if possible. -->

+ [More Information Needed]

+ ### Factors

+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

+ [More Information Needed]

+ ### Metrics

+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->

+ [More Information Needed]

+ ## Results

+ [More Information Needed]

+ ### Summary

+ # Model Examination [optional]

+ <!-- Relevant interpretability work for the model goes here -->

+ [More Information Needed]

+ # Environmental Impact

+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]

+ # Technical Specifications [optional]

+ ## Model Architecture and Objective

+ [More Information Needed]

+ ## Compute Infrastructure

+ [More Information Needed]

+ ### Hardware

+ [More Information Needed]

+ ### Software

+ [More Information Needed]

+ # Citation [optional]

+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

+ **BibTeX:**

+ [More Information Needed]

+ **APA:**

+ [More Information Needed]

+ # Glossary [optional]

+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

+ [More Information Needed]

+ # More Information [optional]

+ [More Information Needed]

+ # Model Card Authors [optional]

+ [More Information Needed]

+ # Model Card Contact

+ [More Information Needed]