---
language: de
library_name: sentence-transformers
tags:
- sentence-similarity
datasets: and-effect/mdk_gov_data_titles_clf
widget:
- source_sentence: "Bebauungspläne, vorhabenbezogene Bebauungspläne (Geltungsbereiche)"
sentences:
- "Fachkräfte für Glücksspielsuchtprävention und -beratung"
- "Tagespflege Altenhilfe"
- "Bebauungsplan der Innenentwicklung gem. § 13a BauGB - Ortskern Rütenbrock"
example_title: "Bebauungsplan"
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: musterdatenkatalog_clf
results:
- task:
type: text-classification
dataset:
name: and-effect/mdk_gov_data_titles_clf
type: and-effect/mdk_gov_data_titles_clf
split: test
revision: 172e61bb1dd20e43903f4c51e5cbec61ec9ae6e6
metrics:
- type: accuracy
value: 0.73
name: Accuracy 'Bezeichnung'
- type: precision
value: 0.66
name: Precision 'Bezeichnung' (macro)
- type: recall
value: 0.71
name: Recall 'Bezeichnung' (macro)
- type: f1
value: 0.67
name: F1 'Bezeichnung' (macro)
- type: accuracy
value: 0.89
name: Accuracy 'Thema'
- type: precision
value: 0.90
name: Precision 'Thema' (macro)
- type: recall
value: 0.89
name: Recall 'Thema' (macro)
- type: f1
value: 0.88
name: F1 'Thema' (macro)
---
# Model Card for Musterdatenkatalog Classifier
# Model Details
## Model Description
This model is based on [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) and fine-tuned on [and-effect/mdk_gov_data_titles_clf](https://huggingface.co/datasets/and-effect/mdk_gov_data_titles_clf).
It was created as part of the Bertelsmann Foundation's Musterdatenkatalog (MDK) project (See their website [here](https://www.bertelsmann-stiftung.de/de/unsere-projekte/smart-country/musterdatenkatalog)).
The main intent of the MDK project was to classify open data into a taxonomy to help give an overview of already published data.
It can help municipalities in Germany, as well as data analysts and journalists, to see which cities have already published data sets and what might be missing.
The project uses a taxonomy to classify the data, and the model was trained specifically for this project and its classification task. It therefore has a clearly defined downstream task and should be used together with the taxonomy described below.
**Information about the underlying taxonomy:**
The used taxonomy 'Musterdatenkatalog' has two levels: 'Thema' and 'Bezeichnung' which roughly translates to topic and label. There are 25 entries for the top level ranging from topics such as 'Finanzen' (finance) to 'Gesundheit' (health).
The second level, 'Bezeichnung' (label), goes into more detail and would, for example, contain 'Krankenhaus' (hospital) when the topic is health. The second level contains 241 labels. The combination of topic and label (Thema + Bezeichnung) forms a 'Musterdatensatz'.
The data can be classified at either the topic or the label level; results for both are presented below. Although a mapping to other taxonomies is provided in the published RDF version of the taxonomy (todo), the model is tailored to this taxonomy.
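As a minimal illustration of this structure (only entries mentioned above are shown; the full taxonomy has 25 topics and 241 labels, and the ' - ' concatenation is an assumption based on the class names used in the evaluation below):

```python
# Hypothetical sketch of the two-level taxonomy; entry names are taken from
# the examples in this card, not from the full published taxonomy.
taxonomy = {
    "Gesundheit": ["Krankenhaus"],      # Thema -> Bezeichnungen
    "Raumplanung": ["Bebauungsplan"],
}

# A 'Musterdatensatz' combines Thema and Bezeichnung.
musterdatensaetze = [
    f"{thema} - {bezeichnung}"
    for thema, bezeichnungen in taxonomy.items()
    for bezeichnung in bezeichnungen
]
print(musterdatensaetze)  # ['Gesundheit - Krankenhaus', 'Raumplanung - Bebauungsplan']
```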
- **Developed by:** and-effect
- **Model type:** Text Classification
- **Language(s) (NLP):** de
- **Finetuned from model:** bert-base-german-cased. For more information on the base model, see [this model card](https://huggingface.co/bert-base-german-cased).
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Demo:** [More Information Needed]
# Direct Use
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Get Started with Sentence Transformers
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')  # replace '{MODEL_NAME}' with this model's Hub id
embeddings = model.encode(sentences)
print(embeddings)
```
## Get Started with HuggingFace Transformers
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')  # replace '{MODEL_NAME}' with this model's Hub id
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
# Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
The model is intended to classify titles of open datasets published by German municipalities. It is specifically tailored to this task and relies on a specific taxonomy.
More information on the taxonomy (classification categories) and the Project can be found on the [project website](https://www.bertelsmann-stiftung.de/de/unsere-projekte/smart-country/musterdatenkatalog).
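Because the model itself only produces embeddings, titles are classified via semantic search against the embedded taxonomy entries. The following is a minimal sketch of that approach, not the project's exact pipeline; the label list is a small hypothetical subset of the taxonomy and the model id is assumed from this card:

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical subset of taxonomy entries ('Musterdatensätze'); the full
# taxonomy has 241 'Bezeichnung' entries.
labels = [
    "Gesundheit - Krankenhaus",
    "Raumplanung - Bebauungsplan",
    "Finanzen - Haushaltsplan",
]

# Assumed Hub id for this model.
model = SentenceTransformer("and-effect/musterdatenkatalog_clf")

# Embed the taxonomy entries once, then each incoming dataset title.
label_embeddings = model.encode(labels, convert_to_tensor=True)
title = "Bebauungsplan der Innenentwicklung gem. § 13a BauGB - Ortskern Rütenbrock"
title_embedding = model.encode(title, convert_to_tensor=True)

# Assign the taxonomy entry with the highest cosine similarity to the title.
scores = util.cos_sim(title_embedding, label_embeddings)[0]
print(labels[int(scores.argmax())])  # expected: 'Raumplanung - Bebauungsplan'
```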
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model has some limitations with respect to its downstream task.
1. **Distribution of classes**: The training dataset is small, while the number of classes is very high. Thus, some classes have only a few examples (more information about the class distribution of the training data can be found here). Consequently, performance on the smaller classes may not be as good as on the majority classes, and the evaluation of these classes is likewise limited.
2. **Systematic problems**: Some topics are systematically misclassified. One example is the embedding of titles containing 'Corona': in none of the evaluated cases could these titles be embedded in a way that matched their true labels. Another systematic example is the embedding and classification of titles related to 'migration'.
3. **Generalization of the model**: By using semantic search, the model can assign titles to new categories that were not seen during training, but it is not tuned for this, so its performance on unseen classes is likely to be limited.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
You can find all information about the training data [here](https://huggingface.co/datasets/and-effect/mdk_gov_data_titles_clf). For fine-tuning we used revision 172e61bb1dd20e43903f4c51e5cbec61ec9ae6e6 of the data, since performance was better with this previous version of the data.
## Training Procedure
### Preprocessing
This section describes how the input data for the model is generated. More information on the preprocessing of the data itself can be found [here](https://huggingface.co/datasets/and-effect/mdk_gov_data_titles_clf).
The model is fine-tuned with similar and dissimilar pairs. Similar pairs are built from each title and its true label. Dissimilar pairs are built from a title and labels other than its true label. Since there are far more dissimilar combinations, a sample of two dissimilar pairs per title is selected (a sketch of this pair construction follows the table below).
| pairs | size |
|-----|-----|
| train_similar_pairs | 1964 |
| train_unsimilar_pairs | 982 |
| test_similar_pairs | 498 |
| test_unsimilar_pairs | 249 |
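A minimal sketch of how such pairs could be built with `sentence_transformers.InputExample` (the function name, sampling details and similarity targets of 1.0/0.0 are assumptions based on the description above, not the project's exact code):

```python
import random
from sentence_transformers import InputExample

def build_pairs(titles_with_labels, all_labels, n_dissimilar=2, seed=42):
    """titles_with_labels: list of (title, true_label) tuples."""
    rng = random.Random(seed)
    pairs = []
    for title, true_label in titles_with_labels:
        # Similar pair: the title with its true label, target similarity 1.0.
        pairs.append(InputExample(texts=[title, true_label], label=1.0))
        # Dissimilar pairs: the title with labels other than the true one,
        # down-sampled to n_dissimilar per title, target similarity 0.0.
        negatives = rng.sample([l for l in all_labels if l != true_label], n_dissimilar)
        for negative in negatives:
            pairs.append(InputExample(texts=[title, negative], label=0.0))
    return pairs
```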
## Training Parameters
The model was trained with the following parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader`
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Hyperparameters:
```
{
    "epochs": 3,
    "warmup_steps": 100
}
```
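Putting this together, a fine-tuning run could look roughly like the following sketch. Only the loss, epochs and warm-up steps are taken from this card; the base model, batch size and toy pairs are assumptions:

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Toy pairs; in practice these would come from the pair construction above.
train_pairs = [
    InputExample(
        texts=["Bebauungspläne, vorhabenbezogene Bebauungspläne (Geltungsbereiche)",
               "Raumplanung - Bebauungsplan"],  # assumed true label, target 1.0
        label=1.0,
    ),
    InputExample(
        texts=["Bebauungspläne, vorhabenbezogene Bebauungspläne (Geltungsbereiche)",
               "Gesundheit - Krankenhaus"],  # different label, target 0.0
        label=0.0,
    ),
]

model = SentenceTransformer("bert-base-german-cased")  # assumed base model setup
train_dataloader = DataLoader(train_pairs, shuffle=True, batch_size=16)  # assumed batch size
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,          # from the hyperparameters above
    warmup_steps=100,  # from the hyperparameters above
)
```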
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
All metrics express the model's ability to classify dataset titles from GovData into the taxonomy described [here](https://huggingface.co/datasets/and-effect/mdk_gov_data_titles_clf). For more information see the [MDK project website](https://www.bertelsmann-stiftung.de/de/unsere-projekte/smart-country/musterdatenkatalog).
## Testing Data, Factors & Metrics
### Testing Data
The evaluation data can be found [here](https://huggingface.co/datasets/and-effect/mdk_gov_data_titles_clf). Since the model was trained on revision 172e61bb1dd20e43903f4c51e5cbec61ec9ae6e6, the evaluation metrics are also based on this revision.
### Metrics
Model performance is tested with four metrics: accuracy, precision, recall and F1 score. Although the data is imbalanced, accuracy is still reported, because the imbalance reflects the real tendency of some classes, for example 'Raumplanung - Bebauungsplan', to have many more entries.
Many classes were never predicted and are therefore set to zero in the calculation of precision, recall and F1 score.
For these metrics, additional calculations were performed. These are denoted with 'II' in the table and exclude the classes with fewer than two predictions on the level 'Bezeichnung'.
These results must be interpreted with care, however, as they give no information about the classes that were left out.
The tasks denoted with 'I' include all classes.
Besides this split into all classes ('I') and the reduced set ('II'), the tasks are also divided by taxonomy level: 'Bezeichnung' or 'Thema'.
As mentioned above, this corresponds to the two levels of the underlying taxonomy: the task on 'Thema' is performed on the first level with 25 classes, and the task on 'Bezeichnung' on the second level with 241 classes.
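A minimal sketch of how the reported macro scores could be computed with scikit-learn (the gold labels and predictions below are placeholders; `zero_division=0` mirrors setting never-predicted classes to zero, as described above):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder gold labels and predictions on the 'Bezeichnung' level.
y_true = ["Raumplanung - Bebauungsplan", "Gesundheit - Krankenhaus", "Raumplanung - Bebauungsplan"]
y_pred = ["Raumplanung - Bebauungsplan", "Raumplanung - Bebauungsplan", "Raumplanung - Bebauungsplan"]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(accuracy, precision, recall, f1)
```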
## Results
| ***task*** | ***accuracy*** | ***precision (macro)*** | ***recall (macro)*** | ***f1 (macro)*** |
|-----|-----|-----|-----|-----|
| Test dataset 'Bezeichnung' I | 0.73 (.82)* | 0.66 | 0.71 | 0.67 |
| Test dataset 'Thema' I | 0.89 (.92)* | 0.90 | 0.89 | 0.88 |
| Test dataset 'Bezeichnung' II | 0.73 | 0.58 | 0.82 | 0.65 |
| Validation dataset 'Bezeichnung' I | 0.51 | 0.35 | 0.36 | 0.33 |
| Validation dataset 'Thema' I | 0.77 | 0.59 | 0.68 | 0.60 |
| Validation dataset 'Bezeichnung' II | 0.51 | 0.58 | 0.69 | 0.59 |
\* The accuracy in brackets was calculated in a manual analysis. This was done to check for data entries that could, for example, belong to more than one class and were therefore actually classified correctly by the algorithm.
In this step the labeling of the test data was also checked again for possible mistakes, which resulted in a better measured performance.
The validation dataset was created manually to check certain classes.
## Additional Information
### Licensing Information
CC BY 4.0